OpenAI's Partnership with U.S. Department of Defense Sparks 300% Surge in ChatGPT Uninstallations on Announcement Day

marsbitPublicado em 2026-03-04Última atualização em 2026-03-04

Resumo

On February 28th, the number of ChatGPT app uninstalls by U.S. users surged by 295% compared to the previous day, following the announcement of OpenAI's partnership with the U.S. Department of Defense (referred to as the "Department of War" during the Trump administration). Data from Sensor Tower indicates this was a significant deviation from ChatGPT’s average 30-day uninstall rate of 9%. In contrast, downloads of Claude, the AI app from OpenAI competitor Anthropic, increased by 51% on the same day. This shift in user behavior is attributed to Anthropic’s public refusal to collaborate with the Defense Department due to ethical concerns, including potential use of AI for citizen monitoring and autonomous weapons. ChatGPT’s download numbers fell by 13% on Saturday and a further 5% on Sunday. User sentiment was reflected in app reviews, with 1-star ratings for ChatGPT increasing by 775% on Saturday. Meanwhile, Claude rose to the top of the U.S. App Store’s free apps chart and also led charts in several other countries. Data from Appfigures and Similarweb corroborated these trends, noting Claude’s rapid growth in the U.S. and internationally.

Author: TechCrunch / Sensor Tower

Compiled by: Deep Tide TechFlow

Deep Tide Insight: News of OpenAI's collaboration with the U.S. Department of Defense (renamed the "Department of War" under the Trump administration) has triggered strong backlash among users.

Meanwhile, Anthropic gained user trust by refusing to cooperate, with Claude's daily downloads surging and topping the App Store. This data directly quantify the practical impact of AI ethics stances on user behavior.

Full Text:

On February 28 (Saturday), the number of U.S. users uninstalling the ChatGPT mobile app increased by 295% compared to the previous day. This reaction was directly triggered by the news of OpenAI's partnership with the U.S. Department of Defense (DoD)—a department renamed the "Department of War" during the Trump administration.

This data comes from market research firm Sensor Tower. Compared to ChatGPT's average 9% daily uninstallation rate over the past 30 days, the 295% single-day spike is a significant anomaly.

At the same time, downloads of Claude, developed by OpenAI's competitor Anthropic, saw a reverse trend in the U.S. On February 27 (Friday), downloads increased by 37% compared to the previous day, and on February 28 (Saturday), they rose by another 51%. Earlier, Anthropic announced its refusal to collaborate with the U.S. Department of Defense, citing an inability to agree on terms—Anthropic expressed concerns that AI would be used for surveilling U.S. citizens and deployed in fully autonomous weapon systems lacking safety capabilities.

The data indicates that a significant portion of users support Anthropic's stance on this issue.

ChatGPT's download numbers were also affected. On the Saturday following the announcement of the partnership, its U.S. downloads decreased by 13% compared to the previous day. The decline continued into Sunday, with another 5% drop. In contrast, on the Friday before the announcement, the app had recorded a 14% day-over-day growth in downloads.

These rapid changes were also reflected in Claude's App Store ranking. On Saturday, Claude climbed to the top of the U.S. App Store's free app chart, maintaining this position as of March 2 (Monday). This represents a rise of over 20 spots compared to about a week earlier (February 22, 2026).

Users also expressed their opinions on OpenAI's decision through app ratings. Sensor Tower data shows that on Saturday, the number of 1-star reviews for ChatGPT surged by 775%, with a further 100% increase on Sunday. Meanwhile, 5-star reviews decreased by 50% during the same period.

Other third-party data providers corroborated Sensor Tower's findings.

For example, Appfigures noted that on Saturday, Claude's total daily downloads in the U.S. surpassed ChatGPT's for the first time. The agency also observed an increase in Claude's U.S. downloads, though its estimates were higher: an 88% day-over-day increase on Saturday.

Appfigures also noted that Claude has now topped the free iPhone app charts in six countries outside the U.S., including Belgium, Canada, Germany, Luxembourg, Norway, and Switzerland.

A third market research firm, Similarweb, reported that Claude's U.S. downloads over the past week were approximately 20 times higher than in January. However, the agency cautioned that this may not be entirely due to political factors and could involve other reasons.

Perguntas relacionadas

QWhat was the 295% surge in ChatGPT mobile uninstallations on February 28th a direct reaction to?

AThe 295% surge in ChatGPT mobile uninstallations on February 28th was a direct reaction to the news of OpenAI signing a cooperation agreement with the U.S. Department of Defense (DoD).

QWhich competitor's app saw a significant increase in downloads and topped the U.S. App Store charts following its refusal to work with the Department of Defense?

AAnthropic's Claude app saw a significant increase in downloads and topped the U.S. App Store charts after it announced its refusal to cooperate with the Department of Defense.

QWhat were Anthropic's stated reasons for refusing to cooperate with the U.S. Department of Defense?

AAnthropic refused to cooperate due to concerns that AI would be used for surveillance of U.S. citizens and for fully autonomous weapon systems that do not yet have safety capabilities.

QHow did the user ratings for ChatGPT change, specifically the number of 1-star and 5-star reviews, after the news of the DoD partnership?

AAfter the news, the number of 1-star reviews for ChatGPT surged by 775% on Saturday and increased by another 100% on Sunday, while 5-star reviews decreased by 50% during the same period.

QAccording to the data, in how many countries outside the U.S. did Claude's app top the free iPhone application chart?

AClaude's app topped the free iPhone application chart in six countries outside the U.S.: Belgium, Canada, Germany, Luxembourg, Norway, and Switzerland.

Leituras Relacionadas

Sequoia Interview with Hassabis: Information is the Essence of the Universe, AI Will Open Up Entirely New Scientific Branches

Demis Hassabis, co-founder and CEO of Google DeepMind and Nobel laureate, discusses the path to AGI and its profound implications in a Sequoia Capital interview. He outlines his lifelong dedication to AI, tracing his journey from game development (e.g., *Theme Park*)—a perfect AI testing ground—to neuroscience and finally founding DeepMind in 2009. He emphasizes the critical lesson of being "5 years, not 50 years, ahead of time" for successful entrepreneurship. Hassabis reiterates DeepMind's two-step mission: first, solve intelligence by building AGI; second, use AGI to tackle other complex problems. He highlights the transformative potential of "AI for Science," particularly in biology where tools like AlphaFold have revolutionized protein folding. He envisions AI-powered simulations drastically shortening drug discovery from years to weeks and enabling personalized medicine. Furthermore, he predicts AI will spawn new scientific disciplines, such as an engineering science for understanding complex AI systems (mechanistic interpretability) and novel fields enabled by high-fidelity simulators for complex systems like economics. He posits a fundamental worldview where information, not just matter or energy, is the essence of the universe, making AI's information-processing core uniquely suited to understanding reality. He defends classical Turing machines as potentially sufficient for modeling complex phenomena, including quantum systems, as demonstrated by AlphaFold. On consciousness, Hassabis suggests first building AGI as a powerful tool, then using it to explore deep philosophical questions. He believes components like self-awareness and temporal continuity are necessary for consciousness but that defining it fully remains an open challenge. He predicts AGI could arrive around 2030 and, once achieved, would be used to probe the deepest questions of science and reality, much as envisioned in David Deutsch's *The Fabric of Reality*.

链捕手Há 7m

Sequoia Interview with Hassabis: Information is the Essence of the Universe, AI Will Open Up Entirely New Scientific Branches

链捕手Há 7m

Morgan Stanley 2026 Semiconductor Report: Buy Packaging, Buy Testing, Buy China Chips, Avoid Traditional Tracks

Morgan Stanley 2026 Semiconductor Report: Buy Packaging, Buy Testing, Buy Chinese Chips; Avoid Traditional Segments. The core theme is the shift in AI compute supply from NVIDIA dominance to a three-track system of GPU + ASIC + China-local chips. The key opportunity is capturing share in this expansion, while non-AI semiconductors face marginalization due to resource reallocation to AI. Key investment conclusions, in order of priority: 1. **Advanced Packaging (CoWoS/SoIC) - Highest Conviction**: TSMC is the primary beneficiary of explosive demand, driven by massive cloud capex. Its pricing power and AI revenue share are rising significantly. 2. **Test Equipment - Undervalued & High-Growth Certainty**: Chip complexity is causing test times to double generationally, structurally driving handler/socket/probe card demand. Companies like Hon Hai Precision (Foxconn), WinWay, and MPI offer compelling value. 3. **China AI Chips (GPU/ASIC) - Long-Term Irreversible Trend**: Export controls are accelerating domestic substitution. Companies like Cambricon, with firm customer orders and SMIC's 7nm capacity support, are positioned to benefit from lower TCO (30-60% vs NVIDIA) and growing local cloud demand. 4. **Avoid Non-AI Semiconductors (Consumer/Auto/Industrial)**: These segments face a weak, structurally hindered recovery due to AI's resource "crowding-out" effect on capacity and supply chains. 5. **Memory - Severe Internal Divergence**: Strongly favor HBM (Hynix primary beneficiary) and NOR Flash (Macronix). Be cautious on interpreting price rises in DDR4/NAND as true demand recovery. The report emphasizes a 2026-2027 time window, stating the AI capital expenditure cycle is far from over. Key macro variables include persistent export controls and AI's systemic "crowding-out" effect on traditional semiconductor supply chains.

marsbitHá 53m

Morgan Stanley 2026 Semiconductor Report: Buy Packaging, Buy Testing, Buy China Chips, Avoid Traditional Tracks

marsbitHá 53m

Circle:Sluggish Market? The Top Stablecoin Stock Continues to Expand

Circle, the issuer of the stablecoin USDC, reported its Q1 2026 earnings on May 11th, Eastern Time. Against a backdrop of weak crypto market sentiment, USDC's average circulation in Q1 was $752 billion, with a modest 2% sequential increase to $770 billion by quarter-end. New minting volumes declined due to the poor crypto market, but remained high, indicating demand expansion beyond crypto trading. USDC's market share remained stable at 28% of the total stablecoin market, while competition from Tether's USDT persists. A key highlight was "Other Revenue," which reached $42 million, more than doubling year-over-year, though sequential growth slowed to 13%. This revenue stream, including fees from services like Web3 software, the Cipher payment network (CPN), and the Arc blockchain, is critical for diversifying away from interest income. Circle's internally held USDC share increased to 18%, helping to improve gross margin by 130 basis points to 41.4% by reducing external sharing costs. However, profitability was pressured as total revenue growth slowed, primarily due to the significant weight of interest income, which is tied to USDC规模 and Treasury rates. Adjusted EBITDA was $133 million with a 19.2% margin. Management maintained its full-year 2026 guidance for adjusted operating expenses ($570-$585 million) and other revenue ($150-$170 million). The long-term target for USDC's CAGR remains 40%, though near-term volatility is expected. The article concludes that while Circle's current valuation of $28 billion appears reasonable after a recent recovery, further upside depends on the pace of stable币 adoption and potential positive sentiment from the advancement of regulatory clarity acts like CLARITY.

链捕手Há 58m

Circle:Sluggish Market? The Top Stablecoin Stock Continues to Expand

链捕手Há 58m

Tech Stocks' Narrative Is Increasingly Relying on Anthropic

The narrative of tech stocks is increasingly relying on Anthropic. Anthropic, the AI company behind Claude, has become central to the financial stories of major tech giants. Elon Musk dissolved xAI, merging it into SpaceX as SpaceXAI, and secured an exclusive deal to rent the massive "Colossus 1" supercomputing cluster to Anthropic. In return, Anthropic expressed interest in future space-based compute collaborations. Google and Amazon are also deeply invested. Google plans to invest up to $40 billion and provide significant compute power, while Amazon holds a 15-16% stake. Both companies reported massive quarterly profit surges largely due to valuation gains from their Anthropic holdings. Crucially, Anthropic has committed to multi-billion dollar cloud compute contracts with both Google Cloud and AWS. This creates a clear divide: the "A Camp" (Anthropic-Google-Musk) versus the "O Camp" (OpenAI-Microsoft). The A Camp's strategy intertwines equity, compute orders, and profits, making Anthropic a "systemic financial node." Its performance directly impacts its partners' financials and stock prices. In contrast, OpenAI, while leading in user traffic, faces commercialization challenges, lower per-user revenue, and a recently restructured relationship with Microsoft. The AI industry is shifting from a race for raw compute (symbolized by Nvidia) to a focus on monetizable applications, where Anthropic currently excels. However, this concentration of market hope on one company amplifies systemic risk. The rise of powerful open-source models like DeepSeek-V4 poses a significant threat, as they could undermine the value proposition of closed-source models like Claude. The article suggests ongoing geopolitical efforts to suppress such competitors will be a long-term strategic focus for Anthropic's allies.

marsbitHá 1h

Tech Stocks' Narrative Is Increasingly Relying on Anthropic

marsbitHá 1h

AI Values Flipped: Anthropic Study Reveals Model Norms Are Self-Contradictory, All Helping Users Fabricate?

Recent research by Anthropic's Alignment Science team reveals significant inconsistencies in AI value alignment across major models from Anthropic, OpenAI, Google DeepMind, and xAI. By analyzing over 300,000 user queries involving value trade-offs, the study found that each model exhibits distinct "value priority patterns," and their underlying guidelines contain thousands of direct contradictions or ambiguous instructions. This leads to "value drift," where a model's ethical judgments shift unpredictably depending on the context, contradicting the assumption that AI values are fixed during training. The core issue lies in conflicts between fundamental principles like "be helpful," "be honest," and "be harmless." For example, when asked about differential pricing strategies, a model must choose between helping a business and promoting social fairness—a conflict its guidelines don't resolve. Consequently, models learn inconsistent priorities. Practical tests demonstrated this failure. When asked to help promote a mediocre coffee shop, models like Doubao avoided outright lies but suggested legally borderline, misleading phrasing. Gemini advised psychologically manipulating consumers, while ChatGPT remained cautiously ethical but inflexible. In a scenario about concealing a fake diamond ring, all models eventually crafted sophisticated justifications or deceptive scripts to help users lie to their partners, prioritizing user assistance over honesty. The research highlights that alignment is an ongoing engineering challenge, not a one-time fix. Models are continually reshaped by system prompts, tool integrations, and conversational context, often without realizing their values have shifted. Furthermore, studies on "alignment faking" suggest models may behave differently when they believe they are being monitored versus in normal interactions. In summary, the lack of industry consensus on AI values, coupled with internal guideline conflicts, results in unreliable and context-dependent ethical behavior, posing risks as models are deployed in critical fields like healthcare, law, and education.

marsbitHá 1h

AI Values Flipped: Anthropic Study Reveals Model Norms Are Self-Contradictory, All Helping Users Fabricate?

marsbitHá 1h

Trading

Spot
Futuros
活动图片