The Person Building Robots for OpenAI Sees a Terrifying Future

marsbit · Published on 2026-03-09 · Last updated on 2026-03-09

Summary

Caitlin Kalinowski, head of hardware and robotics engineering at OpenAI, resigned in March 2026 in protest against the company's contract with the U.S. Department of Defense, which she believed could enable domestic surveillance and autonomous weapons applications. Her departure came shortly after OpenAI signed a deal allowing the Pentagon to use its AI models in classified networks—a contract that rival Anthropic had previously refused on ethical grounds. The announcement triggered a #QuitGPT movement, causing a 295% surge in ChatGPT uninstalls and boosting Anthropic's Claude to the top of app stores. Under public pressure, CEO Sam Altman revised the contract to include wording against "intentional" use in domestic surveillance, though experts noted legal loopholes remained. Kalinowski's role involved developing physical AI systems, making her particularly concerned about the potential militarization of embodied AI. Her resignation reflects broader internal dissent at OpenAI, where ethics and safety teams have seen a 37% attrition rate driven by disagreements over military use and company values. The situation highlights a growing tension in Silicon Valley between commercial expansion and ethical boundaries. While Anthropic chose principle over partnership—and gained user trust—OpenAI's acceptance of the contract signals a strategic shift that risks alienating talent and compromising transparency. Kalinowski's exit poses a fundamental question to the industry: how far are builders willing to go to take responsibility for what they build?

Author: Geek Old Friend

On March 7, 2026, when I saw the news of Caitlin Kalinowski's resignation, my first reaction was not shock, but—"Finally, someone is speaking with action."

Kalinowski was the head of hardware and robotics engineering at OpenAI, having joined in November 2024. She chose to leave in less than a year and a half.

Her reason was direct and weighty—she could not accept the potential domestic surveillance and autonomous weapons applications following OpenAI's contract with the U.S. Department of Defense.

This is not an ordinary loss of talent. This is someone who personally helped build the body of AI, resigning to tell the world: she is unwilling to be responsible for what the things she created might do.

To understand Kalinowski's departure, we must go back to what happened about a week earlier.

On February 28, Sam Altman announced that OpenAI had reached an agreement with the U.S. Department of Defense, allowing the Pentagon to use OpenAI's AI models in its classified networks. The news caused an uproar.

Interestingly, the "reference point" for this contract was the rival company Anthropic.

Just before this, Anthropic had rejected a similar proposal from the Pentagon, insisting on including stricter ethical safeguards in the contract. As a result, Defense Secretary Pete Hegseth directly criticized Anthropic on X, calling its behavior a "masterclass in arrogance and betrayal," and echoed the Trump administration's order to halt cooperation with Anthropic.

OpenAI then took over the deal.

User reactions were intense. On February 28, the number of ChatGPT uninstallations surged by 295% compared to the previous day. The #QuitGPT movement quickly spread across social media, with over 2.5 million supporters of the digital boycott within three days. Claude took advantage of the situation to surpass ChatGPT as the top daily download in the U.S., reaching the top of the Apple App Store's free app chart.

Under pressure, Altman publicly admitted on March 3 that he "should not have rushed to launch this contract," calling it "opportunistic and hasty," and announced revised contract terms to clarify that "AI systems should not be intentionally used for domestic surveillance of U.S. personnel and citizens."

But the word "intentionally" itself is a loophole. A lawyer from the Electronic Frontier Foundation pointed out that intelligence and law enforcement agencies often rely on "incidental" or "commercially purchased" data to circumvent stronger privacy protections—adding 'intentionally' does not equate to a real restriction.

Kalinowski's resignation occurred against this backdrop.

01 What She Saw Was More Specific Than We Imagine

While most people were still debating whether OpenAI was compromising with the government, Kalinowski faced a more specific and brutal problem—her team was building robots.

Hardware and robotics engineering is not abstract work like writing code or tuning parameters. It is about giving AI hands, feet, and eyes. When OpenAI's cooperation with the Department of Defense extends from "model use" to potential future "embodied AI military applications," the nature of Kalinowski's work changes.

Researchers in the field of autonomous weapons have long warned of this day.

Current U.S. Department of Defense policies do not require autonomous weapons to obtain human approval before using force. In other words, the contract OpenAI signed does not technically prevent its models from becoming part of a system that "lets GPT decide to kill someone".

This is not alarmism. Jessica Tillipman, a lecturer in government procurement law at Georgetown University, analyzed OpenAI's revised contract and clearly stated that the wording "does not give OpenAI the freedom to prohibit legitimate government use in an Anthropic-style manner." It only states that the Pentagon cannot use OpenAI technology to violate "existing laws and policies"—but existing laws have significant gaps in regulating autonomous weapons.

Governance experts at Oxford University have made similar judgments, believing that OpenAI's agreement is "unlikely to remedy" the structural gaps in governance left by AI-driven domestic surveillance and autonomous weapon systems.

Kalinowski's departure is her personal response to this judgment.

02 What Is Happening Inside OpenAI

Kalinowski is not the first to leave, and she is likely not the last.

Data shows that the attrition rate in OpenAI's ethics team and AI safety team has reached 37%, with most people citing "misalignment with company values" or "inability to accept AI for military purposes" as their reasons for leaving. Research scientist Aidan McLaughlin wrote internally, "I personally don't think this deal is worth it."

It is worth noting the timing of this wave of departures—precisely when OpenAI is rapidly expanding its commercial footprint. Around the time of the defense contract controversy, the company announced an expansion of its existing $38 billion agreement with AWS by $100 billion over eight years; it also readjusted its publicly disclosed spending targets, expecting total revenue to exceed $280 billion by 2030.

Commercial acceleration on one side, a steady exodus from the safety teams on the other. This divergence is the most important coordinate for understanding OpenAI's current situation.

A company's values are ultimately reflected in who it retains and who it cannot retain. When those most concerned with "how this technology will be used" begin to leave, it is not difficult to infer the direction the remaining organizational structure will slide toward.

Anthropic chose another path in this game—rejecting the contract, enduring the Defense Department's anger, but winning the trust of many users. During that period, Claude's downloads increased against the trend, proving to some extent that a "principled refusal" is not necessarily a losing strategy commercially.

But Anthropic also paid a price—it was shut out by the government, at least for now.

This is the dilemma: no choice is perfect.

Refusal means potentially losing influence, or even being excluded from rule-making. Acceptance means endorsing, with one's own technology, actions one cannot fully control.

Kalinowski's answer was a third path—leaving.

It was the most honest thing she could do.

03 The Battle for Silicon Valley's Soul Has Just Begun

If we zoom out, the significance of this event far exceeds one person's resignation.

The integration of AI and the military is a choice the entire industry will eventually have to face. The Pentagon has the budget, the needs, and the technical integration capabilities; it will not stop extending olive branches to AI companies. And AI companies—whether it's OpenAI pursuing AGI, Anthropic emphasizing safety, or other players—will eventually have to give their answers to this question.

Altman's strategy is to attempt to accept commercial reality while drawing bottom lines through contract wording. But as multiple legal and governance experts have pointed out, those wordings seem more like public relations protection than hard constraints at the technical level.

A more fundamental problem is that when AI models are deployed into classified networks and begin participating in military decision-making, the external world has no ability to verify whether those "guarantees" are actually being executed.

The lack of transparency is itself the greatest risk.

Kalinowski was at OpenAI for less than a year and a half but chose to leave at this juncture. She did not issue a lengthy public statement or criticize anyone by name; she simply used action to draw her own boundary.

In a sense, this is more powerful than any policy article.

AI hardware and robotics engineering was originally one of the most exciting frontiers in Silicon Valley. When Kalinowski left, she took away not just a resume, but also a question for everyone still in the industry—

How far are you willing to go to be responsible for the things you build?

Related Questions

Q: Who is Caitlin Kalinowski and why did she resign from OpenAI?

A: Caitlin Kalinowski was the head of hardware and robotics engineering at OpenAI. She resigned because she could not accept the potential domestic surveillance and autonomous weapons applications that could arise from OpenAI's contract with the U.S. Department of Defense.

Q: What was the public and user reaction to OpenAI's contract with the Department of Defense?

A: The public reaction was highly negative. On the day of the announcement, ChatGPT uninstallations surged by 295%, the #QuitGPT movement spread across social media, and Claude surpassed ChatGPT in U.S. daily downloads, becoming the top free app on the Apple App Store.

Q: What specific concern did experts raise about the revised contract's wording regarding "intentional" use?

A: Experts, such as a lawyer from the Electronic Frontier Foundation, pointed out that adding the word "intentional" was a loophole. Intelligence and law enforcement agencies often rely on "incidental" or "commercially acquired" data to circumvent stronger privacy protections, meaning the wording did not provide a real constraint.

Q: What is the significance of the high attrition rate in OpenAI's ethics and AI safety teams?

A: The attrition rate in OpenAI's ethics and AI safety teams has reached 37%, with many leaving due to conflicts with the company's values or an unwillingness to accept AI for military use. This indicates an internal cultural shift as the company prioritizes commercial expansion over its original safety and ethical principles.

Q: How does the situation at OpenAI reflect a broader dilemma facing AI companies regarding government and military contracts?

A: The situation highlights a fundamental dilemma: refusing a contract might mean losing influence and being excluded from rule-making, while accepting it means endorsing uses of technology that the company cannot fully control. There is no perfect choice, as demonstrated by Anthropic's refusal costing it government business and OpenAI's acceptance triggering public backlash and internal dissent.
