Editor's Note: OpenAI announced its AI cooperation agreement with the Pentagon just hours after the Pentagon terminated its cooperation with Anthropic, on the grounds that Anthropic had insisted on certain security terms. In response, Anthropic CEO Dario Amodei sent an unusually fierce internal memo to employees, stating bluntly that most of the "security mechanisms" OpenAI touts are merely "security theater" and questioning its stance on autonomous weapons and mass surveillance.
In this roughly 1,600-word email, Amodei not only disclosed details of the negotiations between the two companies and the U.S. defense establishment, but also took direct aim at OpenAI CEO Sam Altman, accusing him of using public relations narratives to obscure the true structure of the deal. This dispute over AI military applications, security red lines, and political relationships is pushing the differences between Silicon Valley's two leading AI companies into the open.
The following is the original text:
I want to be very clear about what OpenAI is currently putting out and the hypocrisy embedded in it. This is what they are actually doing, and I want everyone to see it clearly.
Although there is still much we do not know about their contract with the War Department (DoW) (they themselves may not even be fully aware, since the contract terms are likely quite vague), a few things are certain. Based on the public descriptions from Sam Altman and the War Department (the contract text would of course be needed for final confirmation), their cooperation model is roughly this: the model itself has no legal restrictions on use, the so-called "all legal uses"; on top of that sits a so-called "security layer." In my view, this "security layer" is essentially the model's refusal mechanism, used to prevent the model from completing certain tasks or participating in certain applications.
The so-called "security layer" could also refer to the solution that partners (e.g., Palantir, Anthropic's commercial partner when serving U.S. government clients) tried to sell to us during negotiations. They proposed a classifier or machine learning system, claiming it could allow certain applications to pass while intercepting others. Furthermore, there are indications that OpenAI will assign employees (FDEs, or Frontline Deployment Engineers) to supervise the model's use to prevent improper applications.
Our overall assessment: these solutions are not completely useless, but in the context of military applications they amount to roughly 20% real protection and 80% security theater.
The root of the problem is that whether a model is being used for mass surveillance or for a fully autonomous weapon system often depends on broader contextual information. The model itself does not know what kind of system it is embedded in; it does not know whether a human is "in the loop" (the key issue for autonomous weapons); nor does it know the source of the data it is analyzing: for example, whether it is domestic U.S. data or foreign data, and whether it was provided by companies with user consent or purchased through gray-market channels.
Personnel working in security are already deeply aware of this: model refusal mechanisms are unreliable. Jailbreak attacks are very common; often, one only needs to misrepresent the nature of the data to the model to bypass these restrictions.
There is another key distinction that makes this harder than ordinary security protection: whether a model is executing a cyberattack can often be discerned from its inputs and outputs; but judging the nature of the attack and its specific context is a completely different matter, and that is precisely the judgment needed here. In many cases this is extremely difficult, if not impossible.
The "security layer" Palantir tried to sell us (I imagine they pitched a similar solution to OpenAI) is even worse. Our assessment is that this is almost entirely security theater.
Palantir's basic logic seems to be: "You might have some disgruntled employees in your company; you need to give them something to appease them, or make what's happening invisible to them. This is the service we provide."
As for the issue of having Anthropic or OpenAI employees directly supervise deployments, we also had internal discussions months ago when expanding the Acceptable Use Policy (AUP) for classified environments. The conclusion was very clear: this method is only feasible in very few cases. We will try our best, but it is by no means a core safeguard mechanism to rely on, especially in classified environments. By the way, we are indeed already doing this as much as possible; in this regard, we are no different from OpenAI.
So what I want to say is this: the measures OpenAI has taken basically do not solve the problem.
The essential reason they accept these solutions, while we do not, is: they are concerned with appeasing employees, while we genuinely care about preventing misuse.
These solutions are not without value; we use some of them ourselves, but they fall far short of the required security standards. At the same time, the War Department clearly did not treat OpenAI and us consistently.
In fact, we tried to include security terms similar to OpenAI's in our contract (as a supplement to the AUP, which in our view is the more important part), but the War Department refused. The evidence is in the email discussion chain from that time; as I am very busy right now, I may ask a colleague to find the specific wording later. So the claim that "OpenAI's terms were offered to us and we refused" is not true; likewise, the claim that "OpenAI's terms can effectively prevent mass domestic surveillance or fully autonomous weapons" is also not true.
Furthermore, Sam's and OpenAI's statements also imply that the red lines we proposed, namely fully autonomous weapons and mass domestic surveillance, are already illegal in themselves, making use policies covering them redundant. This rhetoric is almost completely consistent with the War Department's statements, as if it had been coordinated in advance.
But this does not align with the facts.
As we explained in our statement yesterday, the War Department does indeed have the authority to conduct domestic surveillance. In the pre-AI era, the impact of these authorities was relatively limited; in the AI era, their significance is completely different.
For example, the War Department can legally purchase large quantities of U.S. citizens' private data from suppliers (who typically obtain resale rights through obscure user-consent clauses), then use AI to analyze that data at scale: building citizen profiles, assessing political leanings, and tracking movements in physical space. The data they can obtain even includes GPS information.
Another point worth noting: near the end of the negotiations, the War Department proposed that if we deleted a specific clause in the contract regarding "analysis of bulk acquired data," they would be willing to accept all our other terms. And this clause was precisely the only one in the contract that accurately corresponded to the scenario we were most concerned about. We found this very suspicious.
On the issue of autonomous weapons, the War Department claims that "human-in-the-loop" is a legal requirement. It is not. It is merely a Pentagon policy from the Biden administration requiring human involvement in weapon launch decisions, and that policy can be unilaterally modified by the current Secretary of Defense, Pete Hegseth. That is what truly worries us. From a practical standpoint, then, it is not a real constraint.
The large volume of public relations rhetoric from OpenAI and the War Department on these issues is either lying or deliberately sowing confusion. These facts reveal a pattern of behavior, a pattern I have seen many times from Sam Altman. I hope everyone can recognize it.
This morning, he first stated that he agrees with Anthropic's red lines. The purpose is to appear supportive of us, claiming some of the credit while avoiding criticism as they take over this contract. He has also tried to cast himself as someone who wants to "establish uniform contract standards for the entire industry," playing the peacemaker and dealmaker.
But behind the scenes, he is signing a contract with the War Department, preparing to replace us the moment we are marked as a supply chain risk.
At the same time, he must ensure this process doesn't look like "OpenAI abandoned the bottom line while Anthropic stuck to its red lines." He can achieve this because:
First, he can sign all the "security theater" measures we refused, and the War Department and its partners are willing to cooperate, packaging these measures credibly enough to appease his employees.
Second, the War Department is willing to accept some terms he proposed, while they refused the same content when we proposed it.
It is these two points that allow OpenAI to reach an agreement, while we cannot.
The real reasons the War Department and the Trump administration dislike us are: we did not make political donations to Trump (while OpenAI and Greg Brockman donated heavily); we did not offer fawning praise of Trump (while Sam did); we support AI regulation, which runs against their policy agenda; we choose to tell the truth on many AI policy issues (e.g., AI's impact on job displacement); and we actually held to our red lines rather than manufacturing "security theater" around them to appease employees.
Sam is now trying to describe all this as: we are difficult to work with, we are rigid, we lack flexibility, etc. I hope everyone recognizes that this is a classic case of gaslighting.
Vague statements like "someone is difficult to work with" are often used to cover up the truly ugly reasons, the ones I just mentioned: political donations, political loyalty, and security theater.
Everyone needs to understand this and refute this narrative when communicating privately with OpenAI employees.
In other words, Sam is undermining our position under the guise of "supporting us." I hope everyone stays clear-eyed about this: by weakening public support for us, he is making it easier for the government to punish us. I even suspect he may be quietly fanning the flames, although I currently have no direct evidence of that.
At the public and media level, this rhetoric and manipulation seem to have failed. Most people view the OpenAI-War Department deal with caution, even unease, and see us as the principled party (by the way, we are now number two on the App Store download charts).
[Note: Claude later rose to number one on the App Store.]
Of course, this narrative has worked on some fools on Twitter, but that's not important. What I'm truly worried about is: ensuring it doesn't gain traction among OpenAI's own employees.
Due to selection effects, they are already a group relatively easy to persuade. But it is still very important to refute the narratives that Sam is currently peddling to his own employees.