Author: Kuli, Deep Tide TechFlow
A few days ago, a photo went viral online.
India held an AI summit, with Prime Minister Modi on stage, flanked by a row of Silicon Valley bigwigs. During the group photo, Modi raised the hand of the person next to him overhead, and others followed suit, holding hands, presenting a scene of unity.
But, only two people did not hold hands.
The CEOs of OpenAI and Anthropic, the bosses behind ChatGPT and Claude respectively, stood next to each other, each raising a fist.
No hand-holding, no eye contact, like two rivals forced to share a desk by the teacher.
These two companies have competed fiercely in recent years. Claude was created by a team that split off from OpenAI. They fight over users, enterprise clients, and funding, and during this year's Super Bowl, Anthropic even ran ads mocking ChatGPT's plan to introduce advertising.
So, not holding hands? Normal.
However, today they joined hands. Because of the Pentagon.
Here's what happened.
Anthropic, the company behind Claude, signed a contract with the U.S. Department of Defense last year worth up to $200 million. Claude became the first AI model deployed on the U.S. military's classified networks, assisting with tasks like intelligence analysis and mission planning.
But Anthropic drew two red lines in the contract:
Claude cannot be used for mass surveillance of U.S. citizens, nor for autonomous weapons without human involvement. (Reference reading: Anthropic's 72-Hour Identity Crisis)
However, the Pentagon did not accept this.
Their demand came down to two words: no restrictions. They bought the tool, so they should be able to use it freely. Why should a tech company dictate what the U.S. military can or cannot do?
Last Tuesday, Defense Secretary Hegseth gave Anthropic's CEO an ultimatum in person: agree by 5:01 PM Friday, or face the consequences.
Anthropic did not agree.
Their CEO issued a public statement, the gist of which was: we deeply understand the importance of AI for U.S. defense, but in a few cases AI can harm rather than defend democratic values. We cannot in good conscience accept this demand.
The Pentagon's negotiator, Deputy Secretary of Defense Emil Michael, subsequently called him a liar on social media, saying he had a God complex and was playing games with national security.
A Brief Alliance
Then, something unexpected happened.
Employees from OpenAI and Google, over 400 people in total, signed a joint open letter titled "We Will Not Be Divided".
The letter stated that the Pentagon was negotiating with AI companies one by one, trying to get others to agree to the conditions Anthropic refused, using fear to divide each company.
OpenAI's CEO also sent an internal letter to all employees, saying OpenAI has the same red lines as Anthropic:
No mass surveillance, no autonomous lethal weapons.
The two companies that refused to hold hands just days ago suddenly found themselves on the same side because of the Pentagon.
However, this unity probably only lasted a few hours.
At 5:01 PM Friday, the Pentagon's ultimatum expired. Anthropic did not sign.
A U.S. tech company valued at $380 billion risked voiding a $200 million contract and refused the U.S. Department of Defense. In the past, this would have at most resulted in a contract termination and finding a new supplier. But this time, Washington's reaction was far from just commercial.
Trump posted on Truth Social about an hour later, calling Anthropic "left-wing lunatics," saying they were trying to place themselves above the Constitution and gamble with American soldiers' lives.
He demanded all federal agencies immediately stop using Anthropic's technology.
Shortly after, U.S. Defense Secretary Hegseth announced labeling Anthropic as a "supply chain security risk." This label is usually reserved for companies like Huawei. The message was clear: any contractor doing business with the U.S. military could no longer use Anthropic's products.
Anthropic said they would take legal action.
And that same evening, OpenAI, which had previously maintained a united front, signed an agreement with the Pentagon.
An Ideological Problem
What did OpenAI get?
The position left vacant by Claude's removal: the AI supplier for the U.S. military's classified networks. However, OpenAI presented three conditions to the Pentagon: no mass surveillance, no autonomous weapons, and human involvement required for high-risk decisions.
The Pentagon said, okay.
You read that right. Conditions that Anthropic argued over for weeks and couldn't get accepted were agreed upon with another company in a matter of days?
Of course, the proposals weren't exactly the same.
Anthropic wanted an extra layer: they argued that current law can't keep up with AI capabilities. For example, AI can legally purchase and aggregate your location data, browsing history, and social media activity, effectively achieving surveillance while each individual step remains legal.
Anthropic said just writing "no surveillance" is useless; this loophole needs to be closed. OpenAI did not insist on this point; they accepted the Pentagon's argument that existing laws are sufficient.
But if you think this was just a disagreement over terms, you'd be naive. This negotiation was never just about the terms from the start.
White House AI czar David Sacks had already publicly criticized Anthropic for promoting "woke AI" (putting ideology and political correctness first); senior Pentagon officials told the media that the problem with Dario (Anthropic CEO Dario Amodei) was ideologically driven: "we know who we are dealing with."
Elon Musk's xAI, a direct competitor to Anthropic, repeatedly attacked Anthropic on X this week, saying the company "hates Western civilization."
And Anthropic's CEO did not attend Trump's inauguration last year. OpenAI's CEO did.
Making an Example
So let's summarize what happened.
The same principles, the same red lines. Anthropic asked for one extra layer of protection, stood on the wrong side, struck the wrong posture, and was labeled a U.S. national security threat on the level of Huawei.
OpenAI asked for one layer less, had better relations, and got the contract. Is this a triumph of principles, or a price tag on them?
This isn't the first time a Pentagon contract has met resistance.
In 2018, over 4,000 Google employees signed a petition, and more than a dozen resigned, protesting the company's involvement in a Pentagon program called Project Maven. The project used AI to analyze drone footage, helping the military identify targets faster.
Google eventually withdrew. They didn't renew and left. The employees won.
Eight years have passed, and a similar controversy has arisen again. But this time the rules have completely changed. A U.S. company said it would do business with the military, with two exceptions. The U.S. government's response was to kick it out of the entire federal system.
And the damage done by the "supply chain security risk" label far exceeds the loss of a $200 million contract.
Anthropic's revenue this year is estimated at around $14 billion; the $200 million contract is a small fraction of that. But the label means any company doing business with the U.S. military cannot use Claude.
These companies don't need to agree with the Pentagon's stance; they just need to do a risk assessment: continue using Claude and potentially lose government contracts; switch to another model, and face no issues.
The choice is easy. This is the real signal of this event.
Whether Anthropic can withstand this is beside the point. What matters is whether the next company will dare to resist. It will look at this outcome, see the cost of sticking to principles, and then make a very rational decision.
Looking back at that photo from India, everyone holding hands raised overhead, only those two with their own fists clenched.
Perhaps this is the norm.
AI companies can share the same principles, but their hands won't necessarily join.