What Brought GPT and Claude Together? Opposing the Pentagon?

Marsbit · Published on 2026-02-28 · Last updated on 2026-02-28

Abstract

The article discusses the unexpected alignment between rival AI companies OpenAI and Anthropic, driven by their shared ethical stance against the U.S. Department of Defense's demands. Anthropic, the maker of Claude, had signed a $200 million contract with the Pentagon but insisted on two red lines: no mass surveillance of U.S. citizens and no autonomous weapons without human oversight. When the Pentagon demanded unrestricted use, Anthropic refused, citing ethical concerns. In a show of solidarity, over 400 employees from OpenAI and Google signed an open letter supporting Anthropic’s position. OpenAI’s CEO also internally affirmed similar principles. However, this unity was short-lived. After Anthropic held its ground and rejected the Pentagon’s ultimatum, it was labeled a "supply chain security risk," effectively barring it from all federal contracts. Meanwhile, OpenAI secured the Pentagon contract by accepting less stringent terms, agreeing not to engage in mass surveillance or autonomous weapons but without pushing for additional legal safeguards. The piece highlights the political and ideological dimensions of the conflict, noting that Anthropic’s stance was perceived as "woke" and ideologically driven, while OpenAI’s more pragmatic approach was rewarded. The outcome signals the high cost of resisting government pressure in the AI industry and raises questions about the real-world value of ethical principles when faced with political and economic consequences.

Author: Kuli, Deep Tide TechFlow

A few days ago, a photo went viral online.

India held an AI summit, with Prime Minister Modi on stage, flanked by a row of Silicon Valley bigwigs. During the group photo, Modi raised the hand of the person next to him overhead, and others followed suit, holding hands, presenting a scene of unity.

But, only two people did not hold hands.

The CEOs of OpenAI and Anthropic, the bosses behind ChatGPT and Claude respectively, stood next to each other, each raising a fist.

No hand-holding, no eye contact, like two rivals forced to share a desk by the teacher.

These two companies have been fiercely competing in recent years. Claude was created by a team that split off from OpenAI. They fight over users, enterprise clients, and funding, and during this year's Super Bowl, Anthropic even ran ads mocking ChatGPT's plan to introduce advertising.

So, not holding hands? Normal.

However, today they joined hands. Because of the Pentagon.

Here's what happened.

Anthropic, the company behind Claude, signed a contract with the U.S. Department of Defense last year worth up to $200 million. Claude became the first AI model deployed on the U.S. military's classified networks, assisting with tasks like intelligence analysis and mission planning.

But Anthropic drew two red lines in the contract:

Claude cannot be used for mass surveillance of U.S. citizens, nor for autonomous weapons without human involvement. (Reference reading: Anthropic's 72-Hour Identity Crisis)

However, the Pentagon did not accept this.

Their demand came down to two words: no restrictions. They bought the tool, so they should be able to use it freely. Why should a tech company dictate what the U.S. military can or cannot do?

Last Tuesday, Defense Secretary Hegseth gave Anthropic's CEO an ultimatum in person: agree by 5:01 PM Friday, or face the consequences.

Anthropic did not agree.

Their CEO issued a public statement, the gist of which was: We deeply understand the importance of AI for U.S. defense, but in a few cases, AI can harm rather than defend democratic values. We cannot in good conscience accept this demand.

The Pentagon's negotiator, Deputy Secretary of Defense Emil Michael, subsequently called him a liar on social media, saying he had a God complex and was playing games with national security.

A Brief Alliance

Then, something unexpected happened.

Employees from OpenAI and Google, over 400 people in total, signed a joint open letter titled "We Will Not Be Divided".

The letter stated that the Pentagon was negotiating with AI companies one by one, trying to get others to agree to the conditions Anthropic refused, using fear to divide each company.

OpenAI's CEO also sent an internal letter to all employees, saying OpenAI has the same red lines as Anthropic:

No mass surveillance, no autonomous lethal weapons.

The two companies that refused to hold hands just days ago suddenly found themselves on the same side because of the Pentagon.

However, this unity probably only lasted a few hours.

At 5:01 PM Friday, the Pentagon's ultimatum expired. Anthropic did not sign.

A U.S. tech company valued at $380 billion walked away from a contract worth up to $200 million and said no to the U.S. Department of Defense. In the past, this would at most have meant a terminated contract and a search for a new supplier. But this time, Washington's reaction was far from merely commercial.

Trump posted on Truth Social about an hour later, calling Anthropic "left-wing lunatics," saying they were trying to place themselves above the Constitution and gamble with American soldiers' lives.

He demanded all federal agencies immediately stop using Anthropic's technology.

Shortly after, U.S. Defense Secretary Hegseth announced labeling Anthropic as a "supply chain security risk." This label is usually reserved for companies like Huawei. The message was clear: any contractor doing business with the U.S. military could no longer use Anthropic's products.

Anthropic said they would take legal action.

And that same evening, OpenAI, which had previously maintained a united front, signed an agreement with the Pentagon.

An Ideological Problem

What did OpenAI get?

The position left vacant by Claude's removal: the AI supplier for the U.S. military's classified networks. However, OpenAI presented three conditions to the Pentagon: no mass surveillance, no autonomous weapons, and human involvement required for high-risk decisions.

The Pentagon said, okay.

You read that right. Conditions that Anthropic argued over for weeks and couldn't get accepted were agreed upon with another company in a matter of days?

Of course, the proposals weren't exactly the same.

Anthropic wanted an extra layer: they argued that current laws can't keep up with AI capabilities. For example, AI can legally purchase, aggregate your location data, browsing history, social media information—effectively achieving surveillance—with each step being legal.

Anthropic said just writing "no surveillance" is useless; this loophole needs to be closed. OpenAI did not insist on this point; they accepted the Pentagon's argument that existing laws are sufficient.

But if you think this was just a disagreement over terms, you'd be naive. This negotiation was never just about the terms from the start.

White House AI czar David Sacks had already publicly criticized Anthropic for promoting "woke AI" (putting ideology and political correctness first); senior Pentagon officials told the media that the problem with Dario (Anthropic CEO Dario Amodei) was ideologically driven: "we know who we are dealing with."

Elon Musk's xAI, a direct competitor to Anthropic, repeatedly attacked Anthropic on X this week, saying the company "hates Western civilization."

And Anthropic's CEO did not attend Trump's inauguration last year. OpenAI's CEO did.

Making an Example

So let's summarize what happened.

The same principles, the same red lines. Anthropic, for demanding one extra layer of protection, for standing on the wrong side, for striking the wrong posture, was labeled a U.S. national security threat on the same level as Huawei.

OpenAI asked for one less layer, had better relations, and got the contract. Is this a triumph of principles, or a price tag on principles?

This isn't the first time a Pentagon contract has faced resistance.

In 2018, over 4000 Google employees signed a petition, with more than a dozen resigning, protesting the company's involvement in a Pentagon project called Project Maven. That project used AI to analyze drone footage, helping the military identify targets faster.

Google eventually withdrew. They didn't renew and left. The employees won.

Eight years have passed, and a similar controversy has arisen again. But this time the rules have completely changed. A U.S. company said it could do business with the military, but there were two things it wouldn't do. The U.S. government's response was to kick it out of the entire federal system.

And the damage done by the "supply chain security risk" label far exceeds losing a $200 million contract.

Anthropic's revenue this year is estimated at around $14 billion; the $200 million contract is barely a rounding error. But this label means any company doing business with the U.S. military cannot use Claude.

These companies don't need to agree with the Pentagon's stance; they just need to do a risk assessment: continue using Claude and potentially lose government contracts; switch to another model, and face no issues.

The choice is easy. This is the real signal of this event.

Whether Anthropic can withstand this is not the important part. What matters is whether the next company will dare to resist. It will look at this outcome, see the cost of sticking to principles, and then make a very rational decision.

Looking back at that photo from India, everyone holding hands raised overhead, only those two with their own fists clenched.

Perhaps this is the norm.

AI companies can share the same principles, but their hands won't necessarily join.

Related Questions

Q: What was the reason behind the temporary alliance between OpenAI and Anthropic as described in the article?

A: They temporarily aligned because both companies opposed the Pentagon's demand for unrestricted use of AI, specifically refusing to allow their AI models to be used for mass surveillance of U.S. citizens or autonomous weapons without human involvement.

Q: What specific conditions did Anthropic set in its contract with the Pentagon, and why did the deal fall apart?

A: Anthropic set two red lines: their AI Claude must not be used for mass surveillance of U.S. citizens or for autonomous weapons without human control. The deal fell apart because the Pentagon demanded unrestricted use and rejected these conditions, leading Anthropic to refuse the contract on principle.

Q: How did the U.S. government retaliate against Anthropic after it refused the Pentagon's terms?

A: The U.S. government, through the Defense Secretary, labeled Anthropic a "supply chain security risk," effectively barring any contractors working with the U.S. military from using Anthropic's products. Trump also called for all federal agencies to stop using Anthropic's technology.

Q: What differentiated OpenAI's agreement with the Pentagon from Anthropic's failed negotiation?

A: OpenAI accepted the Pentagon's terms with similar red lines (no mass surveillance, no autonomous weapons, human oversight for high-risk decisions) but did not insist on the additional legal safeguards against surveillance loopholes that Anthropic had demanded. OpenAI's better political alignment and relationship with the administration also played a role.

Q: What broader implication does the article suggest about the future of AI companies working with the U.S. government?

A: The article suggests that the U.S. government will strongly enforce compliance with its demands, and companies refusing on ethical grounds may face severe repercussions, such as being blacklisted. This could deter other AI firms from resisting government contracts, prioritizing business survival over ethical principles.
