Safety Narrative Meets Reality Squeeze: How Did Anthropic Fall into an Identity Crisis?

Bitpush · Published 2026-02-27 · Last updated 2026-02-27

Introduction

In the span of seventy-two hours, Anthropic faced a severe identity crisis amid pressure from the U.S. Pentagon, public accusations from Elon Musk, and a major shift in its own safety policy. The Pentagon issued an ultimatum: allow Claude to be used for "all lawful purposes," including autonomous weapons targeting and domestic mass surveillance, by 5:01 PM Friday, or lose a $200 million contract and be blacklisted as a "supply chain risk." Anthropic initially resisted, citing ethical red lines. Simultaneously, Elon Musk accused Anthropic of large-scale training-data theft, pointing to its $1.5 billion settlement over the use of pirated books. Anthropic, in turn, accused three Chinese AI firms of "industrial-scale distillation attacks" on Claude, framing them as a national security threat, a move widely criticized as hypocritical. In a pivotal shift, Anthropic released version 3.0 of its Responsible Scaling Policy (RSP), removing its core commitment to halt training if safety measures proved inadequate, citing competitive pressure and the lack of industry-wide consensus. With a $380 billion valuation and rapid revenue growth, Anthropic's balancing act between its safety-branded identity and commercial-military demands looks increasingly unstable. Its narrative as the "responsible AI" leader is collapsing under political, competitive, and ethical pressure.

Author: Ada, Shenchao TechFlow

Original Title: Anthropic's 72-Hour Identity Crisis


Tuesday, February 24. Washington, the Pentagon.

Anthropic CEO Dario Amodei sat across from Defense Secretary Pete Hegseth. According to multiple media outlets including NPR and CNN citing informed sources, the atmosphere of the meeting was "polite," but the content was anything but.

Hegseth gave him an ultimatum: By 5:01 PM on Friday, lift the restrictions on Claude's military use, allowing the Pentagon to employ it for "all lawful purposes," including autonomous weapons targeting and domestic mass surveillance.

Otherwise, the $200 million contract would be canceled. The Defense Production Act would be invoked for compulsory requisition. Anthropic would be listed as a "supply chain risk," effectively blacklisting it alongside hostile entities from Russia and China.

On the same day, Anthropic released the third version of its "Responsible Scaling Policy" (RSP 3.0), quietly removing the company's core commitment since its founding: not to train more powerful models if safety measures could not be assured.

Also on the same day, Elon Musk posted on X: "Anthropic massively stole training data. This is a fact." A Community Note appended to the post cited a report that Anthropic had paid a $1.5 billion settlement for using pirated books to train Claude.

Within seventy-two hours, this AI company that claimed to have a "soul" simultaneously played three roles: safety martyr, intellectual property thief, and Pentagon traitor.

Which one is real?

Perhaps all of them.

The Pentagon's "Comply or Get Out"

The first layer of the story is simple.

Anthropic was the first AI company granted classified access by the U.S. Department of Defense. The contract, worth up to $200 million, was secured last summer. OpenAI, Google, and xAI subsequently secured contracts of similar scale.

According to Al Jazeera, Claude was used in a U.S. military operation in January of this year. The report stated the operation involved the kidnapping of Venezuelan President Maduro.

But Anthropic drew two red lines: no support for fully autonomous weapons targeting, and no support for mass surveillance of U.S. citizens. Its argument: AI is not yet reliable enough to control weapons, and no laws or regulations currently govern AI's use in mass surveillance.

The Pentagon wasn't buying it.

White House AI advisor David Sacks publicly accused Anthropic on X last October of "using fear as a weapon and engaging in regulatory capture."

Competitors had already capitulated. OpenAI, Google, and xAI all agreed to let the military use their AI for "all lawful scenarios." Musk's Grok was just approved this week for entry into classified systems.

Anthropic was the last one standing.

As of publication, Anthropic's latest statement said it does not intend to concede. But the Friday 5:01 PM deadline is looming.

An anonymous former liaison between the Justice Department and the Defense Department expressed confusion to CNN: "How can you simultaneously declare a company a 'supply chain risk' and force that company to work for your military?"

A good question, but one outside the Pentagon's consideration. What it cares about is simple: comply, or face compulsory measures and pariah status in Washington.

"Distillation Attack": A Slap-in-the-Face Accusation

On February 23, Anthropic published a fiercely worded blog post accusing three Chinese AI companies of carrying out an "industrial-scale distillation attack" on Claude.

The accused are DeepSeek, Moonshot AI, and MiniMax.

Anthropic accused them of using 24,000 fake accounts to initiate over 16 million interactions with Claude, specifically extracting its capabilities in agent reasoning, tool use, and programming.

Anthropic framed this as a national security threat, claiming that distilled models are "unlikely to retain safety guardrails" and could be used by authoritarian governments for cyber attacks, disinformation, and mass surveillance.

The narrative was perfect, the timing was perfect.

It came just after the Trump administration relaxed chip export controls on China, precisely when Anthropic needed fresh ammunition for its lobbying in favor of those controls.

But Musk fired a shot: "Anthropic massively stole training data and paid billions in settlement money for it. This is a fact."

Tory Green, co-founder of AI infrastructure company IO.Net, said: "You train your models on data from the entire web, and when others use your public API to learn from you, it's called a 'distillation attack'?"

Anthropic calls distillation an "attack," but the practice is commonplace in the AI industry: OpenAI has used it to compress GPT-4, Google uses it to optimize Gemini, and Anthropic itself does it. The only difference is that this time, Anthropic was the one being distilled.
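
For readers unfamiliar with the term: distillation simply means training a smaller "student" model to imitate a larger "teacher." Below is a minimal sketch of the textbook form (temperature-softened logit matching, in PyTorch); it is a generic illustration with made-up toy models, not any lab's actual pipeline. When the teacher is only reachable through a public API, as in the Claude allegations, raw logits are not exposed, so the student is typically fine-tuned on the teacher's sampled (prompt, completion) pairs instead.

```python
# Minimal sketch of classic knowledge distillation (a generic technique,
# not any specific lab's pipeline). A small "student" model learns to
# match the temperature-softened output distribution of a frozen "teacher".
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between softened distributions; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2

# Toy stand-ins: a "large" teacher and a small student over 1000 classes.
teacher = torch.nn.Sequential(torch.nn.Linear(128, 512), torch.nn.ReLU(),
                              torch.nn.Linear(512, 1000))
student = torch.nn.Linear(128, 1000)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

x = torch.randn(32, 128)           # a batch of inputs (prompts, in the LLM case)
with torch.no_grad():              # the teacher is only queried, never updated
    teacher_logits = teacher(x)

loss = distillation_loss(student(x), teacher_logits)
loss.backward()
optimizer.step()
```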

As Singapore's Nanyang Technological University AI professor Erik Cambria told CNBC: "The boundary between legitimate use and malicious exploitation is often blurry."

More ironically, Anthropic just paid a $1.5 billion settlement for using pirated books to train Claude. It trains its models on data scraped from the entire web, then accuses others of learning from it through its own public API. That isn't a double standard; it's a triple standard.

Anthropic wanted to play the victim, but got exposed as the defendant.

Dismantling the Safety Promise: RSP 3.0

On the same day as the Pentagon standoff and the Silicon Valley spat, Anthropic released the third version of its Responsible Scaling Policy.

Anthropic Chief Scientist Jared Kaplan told media in an interview: "We felt that stopping AI model training doesn't help anyone. In the context of rapid AI development, unilaterally making promises... while competitors are moving full speed ahead, it doesn't make sense."

In other words, others aren't playing by the rules, so we're dropping the act too.

The core of RSP 1.0 and 2.0 was a hard commitment to pause training if model capabilities exceeded the coverage of safety measures. This commitment gave Anthropic a unique reputation in the AI safety community.

But 3.0 removed it.

In its place is a more "flexible" framework that splits safety work into two tracks: measures Anthropic can take on its own, and recommendations that require industry-wide collaboration. A risk report will be issued every three to six months and reviewed by external experts.

Sounds responsible?

Independent reviewer Chris Painter from the nonprofit METR, after seeing an early draft of the policy, stated: "This indicates Anthropic believes it needs to enter 'triage mode' because methods for assessing and mitigating risks are not keeping pace with the growth in capabilities. This is more evidence that society is not prepared for AI's potential catastrophic risks."

According to TIME, Anthropic spent nearly a year internally debating this rewrite, with CEO Amodei and the board unanimously approving it. The official line is that the original policy was designed to foster industry consensus, but the industry simply didn't follow. The Trump administration adopted a laissez-faire attitude towards AI development, even attempting to repeal state-level regulations. Federal AI legislation is nowhere in sight. Although establishing a global governance framework seemed possible in 2023, three years later, that door has clearly closed.

An anonymous researcher long involved in AI governance put it more bluntly: "The RSP was Anthropic's most valuable brand asset. Deleting the training pause commitment is like an organic food company quietly tearing the 'organic' label off its packaging and then telling you their testing is now more transparent."

Identity Torn Under a $380 Billion Valuation

In early February, Anthropic closed a $30 billion financing round at a $380 billion valuation, with Amazon as the anchor investor. Its annualized revenue has reached $14 billion, a figure that has grown more than tenfold each year for the past three years.

Simultaneously, the Pentagon threatens to blacklist it. Musk publicly accuses it of data theft. Its core safety promise is deleted. Anthropic's AI safety lead, Mrinank Sharma, resigned and wrote on X: "The world is in danger."

Contradiction?

Perhaps contradiction is in Anthropic's DNA.

The company was founded by former OpenAI executives who worried that OpenAI was moving too fast and taking safety too lightly. They then built a company of their own that ships more powerful models at an even faster pace, while telling the world how dangerous those models are.

The business model can be summarized in one sentence: we are more afraid of AI than anyone else, so you should pay us to build it.

This narrative worked perfectly in 2023-2024. AI safety was a hot term in Washington, and Anthropic was the most popular lobbyist.

In 2026, the winds changed.

"Woke AI" became an attack label, state-level AI regulation bills were blocked by the White House, and the California SB 53 supported by Anthropic was signed into law, but the federal level was a wasteland.

Anthropic's safety card is sliding from a "differentiating advantage" to a "political liability."

Anthropic is performing a complex balancing act. It needs to be "safe" enough to preserve its brand, yet "flexible" enough not to be abandoned by the market and the government. The problem is that the room for maneuver on both ends is shrinking.

How Much is the Safety Narrative Worth?

Look at all three events together, and the picture becomes clear.

Accusing Chinese companies of distilling Claude is to strengthen the lobbying narrative for chip export controls. Deleting the safety pause commitment is to avoid falling behind in the arms race. Refusing the Pentagon's autonomous weapons demand is to preserve the last layer of moral clothing.

Each step has logic, but the steps contradict each other.

You can't simultaneously claim that Chinese companies "distilling" your model is a national security threat while deleting the commitment meant to keep your own models from going out of control. If the models are truly that risky, you should be more cautious, not more aggressive.

Unless you are Anthropic.

In the AI industry, identity is not defined by your statements, but by your balance sheet. Anthropic's "safety" narrative is essentially a brand premium.

In the early days of the AI arms race, this premium was valuable. Investors were willing to pay a higher valuation for "responsible AI," governments were willing to give the green light to "trustworthy AI," customers were willing to pay for "safer AI."

But in 2026, this premium is evaporating.

Anthropic now faces not a multiple-choice question of "whether to compromise," but a sequencing problem of "whom to compromise with first." Compromise with the Pentagon, and the brand is damaged. Compromise with competitors, and the safety promise is voided. Compromise with investors, and both give way.

Friday at 5:01 PM, Anthropic will deliver its answer.

But whatever the answer is, one thing is certain: the Anthropic that once stood its ground with "we are different from OpenAI" is becoming like everyone else.

The endpoint of an identity crisis is often the disappearance of identity.



Original link: https://www.bitpush.news/articles/7615114

Related Questions

Q: What ultimatum did the Pentagon give Anthropic regarding the use of Claude?

A: The Pentagon demanded that Anthropic remove restrictions on Claude's military use by 5:01 PM Friday, allowing it to be used for "all lawful purposes," including autonomous weapons targeting and domestic mass surveillance, or face cancellation of a $200 million contract and designation as a "supply chain risk."

Q: What significant change did Anthropic make in Responsible Scaling Policy (RSP) 3.0?

A: Anthropic removed its core commitment to halt training of more powerful models if safety measures could not be guaranteed, replacing it with a more flexible framework that separates internal safety measures from industry-wide recommendations and adds periodic risk reports reviewed by external experts.

Q: What accusation did Elon Musk make against Anthropic on X?

A: Musk accused Anthropic of massive theft of training data, referencing the $1.5 billion settlement Anthropic paid for using pirated books to train Claude.

Q: Why did Anthropic accuse three Chinese AI companies of "distillation attacks"?

A: Anthropic accused DeepSeek, Moonshot AI, and MiniMax of using 24,000 fake accounts to interact with Claude more than 16 million times in order to extract its capabilities in agent reasoning, tool use, and programming, framing this as a national security threat.

Q: What internal contradiction does the article highlight in Anthropic's actions?

A: The article argues that calling Chinese "distillation" of its model a security threat contradicts Anthropic's own removal of its safety-pause commitment, and that its refusal to support autonomous weapons sits uneasily with its accelerating model development, exposing the conflict between its safety narrative and competitive pressure.
