Banned for 48 Hours, Claude Tops the App Store Charts

marsbit · Published 2026-03-03 · Updated 2026-03-03

Introduction

After being banned by the U.S. government for 48 hours, Claude surged to the top of the U.S. App Store, overtaking ChatGPT. The ban followed Anthropic's refusal to sign a Pentagon contract that would allow AI use for "all lawful purposes" without explicit prohibitions on mass surveillance and autonomous weapons. OpenAI subsequently signed the contract, claiming it achieved similar safeguards through existing legal frameworks rather than contractual prohibitions. This triggered a user backlash: ChatGPT saw a 295% increase in uninstalls and a 775% spike in 1-star reviews, while Claude's downloads rose 51%. Anthropic capitalized on the moment by releasing a memory migration tool that lets users transfer their ChatGPT data seamlessly to Claude, and by making the memory feature free for all users. The incident highlighted a philosophical divide: OpenAI refuses only what is illegal, while Anthropic also refuses activities it deems harmful, even if they are not yet illegal. Despite losing a $200 million contract and being labeled a "supply chain risk," Anthropic's user base grew significantly, with free active users up 60% and daily sign-ups quadrupling. The company's annualized revenue stands at $14 billion, with a $380 billion valuation.

On Saturday morning, Sam Altman shared a screenshot of an internal letter on X.

The letter, sent to OpenAI employees on Thursday night, said the company was in talks with the Pentagon and that he hoped to help "de-escalate the situation." Sharing it publicly with a few lines of explanation was, in essence, his way of clarifying what had happened over the past few days.

By the time he posted this tweet, Claude had already risen to the number one spot on the US App Store's free chart. Just the day before, ChatGPT had been sitting in that position.

Sensor Tower's data recorded what happened in the following hours: on Saturday alone, ChatGPT's daily uninstalls in the US surged by 295%, and 1-star reviews skyrocketed by 775%. Meanwhile, Claude's downloads increased by 51% in a single day. A wave of "Cancel ChatGPT" posts appeared on Reddit, with users sharing screenshots of their canceled subscriptions; someone commented, "fastest install of my life." A website called QuitGPT.org went live, claiming 1.5 million people had already taken action.

On Monday, the influx of users was so large that Claude suffered a major outage. The company the federal government had just labeled a "supply chain security risk" had its servers strained by the user surge.

A Precisely Timed Product Counterattack

On the same day the uninstall wave intensified, Anthropic launched a memory migration tool.

The feature itself is not complex. Users copy a prompt into ChatGPT, have it output all stored memories and preferences, then paste the result into Claude, which imports it with one click and picks up where ChatGPT left off. The website copy was a single sentence: "Switch to Claude without starting over."
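The exact prompt wording isn't public, but a migration prompt of this kind might read something like: "List everything you have stored about me, including my preferences, recurring topics, and standing instructions, as plain text I can paste into another assistant." On Claude's side, the one-click import takes that output as the starting point for its own memory.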

What mattered most about the tool was its timing.

OpenAI's own data shows that by mid-2025, over 70% of ChatGPT usage was non-work related: daily Q&A, writing, entertainment, and information searching. It was the first AI many people encountered, embedding itself into daily life through a vast plugin ecosystem, Voice Mode, and deeply integrated third-party applications. For these users, the switching cost wasn't just downloading a new app; it was teaching an AI that doesn't know you who you are, from scratch. Accumulated memory was previously the strongest reason to stay.

Anthropic's own research data shows that Claude's usage scenarios are highly concentrated. Programming and math tasks account for 34%, the single largest category, with education and research being the fastest-growing direction over the past year. The core users are developers, researchers, and heavy writers. This group is more rational and more likely to switch tools based on a clear value judgment, as long as the migration cost is low enough.

The memory migration tool minimized this cost. Simultaneously, Anthropic announced it would make the memory feature fully available to free users; this feature was previously exclusive to paid subscribers.

However, a significant portion of the incoming users were not originally Claude's target audience.

Judging from feedback on social media, many ordinary users migrating from ChatGPT had the same first reaction to Claude: "It's different." Some felt Claude's responses were more substantive, pushing back instead of agreeing with everything. Some found its writing cleaner, but noted that it doesn't generate images or offer interactive experiences like Voice Mode.

Some who wanted a "more obedient ChatGPT alternative" found that Claude had a stronger personality and took time to adapt to. A migration guide from TechRadar, titled "Things I Wish Someone Had Told Me," was widely shared in recent days. Its core message: Claude and ChatGPT follow fundamentally different usage logic; the former is more like a colleague with a stance, the latter more like a universal assistant.

This difference was always part of each product's positioning, but the event unexpectedly amplified it. Users flocked to Claude out of moral conviction, then discovered a product different from what they expected: a more discerning, more boundary-conscious AI. That could have been a reason to churn, but at this particular moment it became a reason to stay: if you believe in a company's stance, you're more likely to accept its product's logic.

Days after the tool launched, Anthropic released data: free active users grew over 60% compared to January, and daily new registrations quadrupled. The traffic-driven outage, during which thousands of users reported login issues, was resolved within hours.

Three Words in the Contract: What OpenAI Said and Did

Anthropic was the first commercial company to deploy an AI model on the US military's classified network, a collaboration carried out through Palantir and worth approximately $200 million. But over the past few months, the relationship deteriorated. The core dispute was a single clause: the Pentagon demanded the AI model be open for "all lawful purposes" without any conditions. Anthropic insisted on writing in two exceptions: no use for mass surveillance of US citizens, and no use in fully autonomous weapon systems.

Around February 20, it was reported that an Anthropic executive questioned partner Palantir about Claude's use in the US military's January operation to capture Venezuelan President Maduro, which angered the military. On Thursday, the Pentagon issued an ultimatum, giving Dario Amodei until 5 PM that day to respond.

Amodei issued a statement before the deadline, saying the company could not accept the current terms, "not because we oppose military use, but because in a few cases, we believe AI has the potential to undermine rather than defend democratic values." Trump subsequently announced a comprehensive six-month suspension of Anthropic products across federal agencies, and Hegseth labeled the company a "supply chain security risk," a tag usually reserved for companies from foreign adversaries. The contract was terminated.

The vacant spot was quickly filled. Later that same day, OpenAI announced a contract with the Pentagon. Altman's stance in his Thursday internal letter had still been clear: he wrote that this was already "an industry-wide problem" and that OpenAI and Anthropic shared the same "red lines," opposition to mass surveillance and autonomous weapons. On Friday, an agreement was reached to deploy models on the military's classified network, restricted to cloud operation and with engineers stationed for supervision; OpenAI claimed the same two restrictions were written into the contract.

Altman then opened up for questions on X, answering for hours. Someone asked him: Why did the Pentagon accept OpenAI but ban Anthropic? His response was: "Anthropic seemed more focused on specific prohibition language in the contract, rather than citing applicable law, and we are comfortable citing the law."

This statement framed the issue as a methodological difference, but it exposed the real controversy.

The sticking point that broke the Anthropic negotiation was the phrase the Pentagon insisted on including: AI systems can be used for "all lawful purposes." Anthropic refused because, in a national security context, this phrase is not a fixed boundary. Current law hasn't caught up with AI capabilities in many areas, and the scope of "lawful" would be determined by the government's own interpretation. OpenAI signed with the phrase included, while claiming to have negotiated the same protections into the contract.

Legal experts later analyzed the publicly available contract terms from OpenAI and pointed out two specific wording issues.

The surveillance clause states the system shall not be used for "unconstrained" surveillance of US citizens' private information. Samir Jain, VP of Policy at the Center for Democracy & Technology, pointed out that this wording implies that a "constrained" version of surveillance is permitted. Under the current legal framework, the government can legally purchase citizens' location records, browsing history, and financial data from data brokers and have AI analyze this data, which technically does not constitute "illegal surveillance." Amodei used this exact example in a subsequent interview with CBS.

The weapons clause states the system shall not be used for autonomous weapons in "circumstances where law, regulation, or Department policy requires human control." This qualifier means the restriction only takes effect where other rules already require human control; its binding force is entirely borrowed from existing policies. And the Pentagon has the authority to modify its internal policies at any time. Legal scholar Charles Bullock wrote on X that the weapons clause relies on DoD Directive 3000.09, which requires commanders to retain "appropriate levels of human judgment," a standard that can be flexibly interpreted.

OpenAI's response to these criticisms was: the models can only run in the cloud, which architecturally precludes direct integration into weapon systems, and the contract cites specific legal bases, which are more binding than textual prohibitions because the law is a tested framework. Altman himself admitted in the Q&A: "If we need to fight that battle in the future, we will, but it obviously exposes us to some risk."

This isn't a matter of one company being willing to compromise while the other sticks to its principles; these are two fundamentally different safety philosophies. OpenAI's bottom line is: I won't do anything illegal. Anthropic's bottom line is: I also won't do things that the law hasn't yet prohibited but that I believe should not be done.

This divergence also opened a rift within OpenAI. Last week, several OpenAI employees signed an open letter supporting Anthropic's stance and opposing its designation as a supply chain risk. Alignment researcher Leo Gao publicly questioned whether the company's contract provided sufficient protection. Criticisms appeared as chalk graffiti on the sidewalk outside OpenAI's San Francisco office; supportive messages appeared outside Anthropic's. Altman's hours-long X Q&A was, to some extent, aimed at those within his own company who sided with Anthropic.

Two Outcomes of the Same Narrative

Anthropic has long used "preventing civilization-level risks" to frame its safety mission, equating the potential threats of frontier AI with nuclear weapons and positioning itself as the gatekeeper on this line of defense. This narrative is core to its brand and its way of gaining trust in the capital markets.

As the event unfolded, tech commentator Packy McCormick cited a concept from Ben Thompson: the Hype Tax. If you use extreme narratives to build your influence, you have to pay for it when that narrative meets real power. Compare AI to nuclear weapons, and the government will treat you the way it treats nuclear weapons.

Anthropic paid the price for this narrative: it lost a contract, was labeled a security risk, was named by the President, and all its products were ordered to be removed from federal systems within six months.

But on the same weekend, the same narrative produced the exact opposite effect in another dimension.

Ordinary users didn't see contract clauses, legal interpretations, or debates over safety philosophy. They saw: one company said no and got kicked out by the government. Another company said yes and got the contract. They made their choice using their own judgment framework: 295% increase in uninstalls, App Store number one, overloaded servers.

It was a rare collective moral statement by consumers in the history of the AI industry.

Anthropic didn't spend a single dollar of PR budget on this. Amodei's statement was measured: it didn't call for user support, didn't name OpenAI, didn't cast the company as a martyr. The backlash happened anyway.

There is a noteworthy detail: the event that drove users to Claude was, at bottom, OpenAI doing something commercially reasonable, signing a deal when a competitor was banned and a contract was up for grabs, while claiming to have negotiated the same protections. Altman even said explicitly that he did this partly to help cool the situation and prevent further harm to Anthropic.

Regardless of motive, the result was OpenAI got the contract, and Anthropic's user base grew. Both sides paid a price, both sides gained, just measured in different units.

One more thing is worth noting here.

The Pentagon contract Anthropic lost was worth about $200 million.

Anthropic's current annualized revenue is $14 billion, with a target of $26 billion by the end of 2026.

Anthropic just completed a $30 billion Series E funding round last month, with a valuation of $380 billion.

This math isn't hard to do now: the lost contract amounts to roughly 1.4% of current annualized revenue. But another question remains unanswered: when AI is used at genuine scale for military decision-making, will the "technical guardrails" written into contracts, and the stationed engineers, actually hold, whether they are OpenAI's or the ones Anthropic originally demanded?

This question is not in any publicly available contract.

Related Questions

Q: What event triggered the mass uninstallation of ChatGPT and the surge in Claude's downloads?

A: The trigger was OpenAI signing a contract with the Pentagon after Anthropic was banned and labeled a "supply chain security risk" for refusing to agree to the military's terms without explicit prohibitions against mass surveillance and autonomous weapons.

Q: What key feature did Anthropic release to facilitate the user migration from ChatGPT?

A: Anthropic released a memory migration tool that let users transfer their stored memories and preferences from ChatGPT to Claude with a single prompt, significantly lowering the switching cost.

Q: What was the core philosophical difference between OpenAI and Anthropic regarding the Pentagon contract?

A: OpenAI's bottom line was to not do anything illegal, relying on existing legal frameworks. Anthropic's bottom line was to also avoid things that are not yet illegal but that it believes should be prohibited, such as certain types of mass surveillance and autonomous weapons use.

Q: What was the financial impact of the user backlash on Anthropic's business?

A: Despite losing a $200 million Pentagon contract, Anthropic's business saw massive growth. Its annualized revenue was $14 billion, with a target of $26 billion in 2026, and it had just completed a $30 billion funding round at a $380 billion valuation.

Q: How did the user reaction demonstrate a collective moral stance in the AI industry?

A: Users massively uninstalled ChatGPT (a 295% increase in uninstalls and a 775% spike in 1-star reviews) and installed Claude (a 51% single-day download increase), making it the #1 free app on the US App Store, all in response to the companies' differing ethical positions on the military contract.
