The Person Building Robots for OpenAI Sees a Terrifying Future

Marsbit · Published 2026-03-09 · Updated 2026-03-09

Article Summary

Caitlin Kalinowski, head of hardware and robotics engineering at OpenAI, resigned in March 2026 in protest against the company's contract with the U.S. Department of Defense, which she believed could enable domestic surveillance and autonomous weapons applications. Her departure came shortly after OpenAI signed a deal allowing the Pentagon to use its AI models in classified networks—a contract that rival Anthropic had previously refused on ethical grounds. The announcement triggered a #QuitGPT movement, causing a 295% surge in ChatGPT uninstalls and boosting Anthropic's Claude to the top of app stores. Under public pressure, CEO Sam Altman revised the contract to include wording against "intentional" use in domestic surveillance, though experts noted legal loopholes remained. Kalinowski's role involved developing physical AI systems, making her particularly concerned about the potential militarization of embodied AI. Her resignation reflects broader internal dissent at OpenAI, where ethics and safety teams have seen a 37% attrition rate amid disagreements over military use and company values. The situation highlights a growing tension in Silicon Valley between commercial expansion and ethical boundaries. While Anthropic chose principle over partnership—and gained user trust—OpenAI's acceptance of the contract signals a strategic shift that risks alienating talent and compromising transparency. Kalinowski's exit poses a fundamental question to the industry: how far are builders willing to go to take responsibility for what they create?

Author: Geek Old Friend

On March 7, 2026, when I saw the news of Caitlin Kalinowski's resignation, my first reaction was not shock, but—"Finally, someone is speaking with action."

Kalinowski was the head of hardware and robotics engineering at OpenAI, having joined in November 2024. She chose to leave in less than a year and a half.

Her reason was direct and weighty—she could not accept the potential domestic surveillance and autonomous weapons applications following OpenAI's contract with the U.S. Department of Defense.

This is not an ordinary loss of talent. This is someone who personally helped build the body of AI, resigning to tell the world: she is unwilling to be responsible for what the things she created might do.

To understand Kalinowski's departure, we must go back to what happened about a week earlier.

On February 28, Sam Altman announced that OpenAI had reached an agreement with the U.S. Department of Defense, allowing the Pentagon to use OpenAI's AI models in its classified networks. The news caused an uproar.

Interestingly, the "reference point" for this contract was the rival company Anthropic.

Just before this, Anthropic had rejected a similar proposal from the Pentagon, insisting on including stricter ethical safeguards in the contract. As a result, Defense Secretary Pete Hegseth directly criticized Anthropic on X, calling its behavior a "masterclass in arrogance and betrayal," and echoed the Trump administration's order to halt cooperation with Anthropic.

OpenAI then took over the deal.

User reactions were intense. On February 28, the number of ChatGPT uninstallations surged by 295% compared to the previous day. The #QuitGPT movement quickly spread across social media, with over 2.5 million supporters of the digital boycott within three days. Claude took advantage of the situation to surpass ChatGPT as the top daily download in the U.S., reaching the top of the Apple App Store's free app chart.

Under pressure, Altman publicly admitted on March 3 that he "should not have rushed to launch this contract," calling it "opportunistic and hasty," and announced revised contract terms to clarify that "AI systems should not be intentionally used for domestic surveillance of U.S. personnel and citizens."

But the word "intentionally" itself is a loophole. A lawyer from the Electronic Frontier Foundation pointed out that intelligence and law enforcement agencies often rely on "incidental" or "commercially purchased" data to circumvent stronger privacy protections—adding 'intentionally' does not equate to a real restriction.

Kalinowski's resignation occurred against this backdrop.

01 What She Saw Was More Specific Than We Imagine

While most people were still debating whether OpenAI was compromising with the government, Kalinowski faced a more specific and brutal problem—her team was building robots.

Hardware and robotics engineering is not abstract work like writing code or tuning parameters. It is about giving AI hands, feet, and eyes. When OpenAI's cooperation with the Department of Defense extends from "model use" to potential future "embodied AI military applications," the nature of Kalinowski's work changes.

Researchers in the field of autonomous weapons have long warned of this day.

Current U.S. Department of Defense policies do not require autonomous weapons to obtain human approval before using force. In other words, the contract OpenAI signed does not technically prevent its models from becoming part of a system that "lets GPT decide to kill someone".

This is not alarmism. Jessica Tillipman, a lecturer in government procurement law at Georgetown University, analyzed OpenAI's revised contract and clearly stated that the wording "does not give OpenAI the freedom to prohibit legitimate government use in an Anthropic-style manner." It only states that the Pentagon cannot use OpenAI technology to violate "existing laws and policies"—but existing laws have significant gaps in regulating autonomous weapons.

Governance experts at Oxford University have reached similar judgments, arguing that OpenAI's agreement is "unlikely to close" the structural governance gaps left by AI-driven domestic surveillance and autonomous weapon systems.

Kalinowski's departure is her personal response to this judgment.

02 What Is Happening Inside OpenAI

Kalinowski is not the first to leave, and she is likely not the last.

Data shows that the attrition rate in OpenAI's ethics team and AI safety team has reached 37%, with most people citing "misalignment with company values" or "inability to accept AI for military purposes" as their reasons for leaving. Research scientist Aidan McLaughlin wrote internally, "I personally don't think this deal is worth it."

It is worth noting the timing of this wave of departures—precisely when OpenAI is rapidly expanding its commercial footprint. Around the time of the defense contract controversy, the company announced an expansion of its existing $38 billion agreement with AWS by $100 billion over eight years; it also readjusted its publicly disclosed spending targets, expecting total revenue to exceed $280 billion by 2030.

Commercial acceleration on one side, safety staff steadily leaving on the other. This divergence is the most important coordinate for understanding OpenAI's current situation.

A company's values are ultimately reflected in who it retains and who it cannot retain. When those most concerned with "how this technology will be used" begin to leave, it is not difficult to infer the direction the remaining organizational structure will slide toward.

Anthropic chose another path in this game—rejecting the contract, enduring the Defense Department's anger, but winning the trust of many users. During that period, Claude's downloads increased against the trend, proving to some extent that a "principled refusal" is not necessarily a losing strategy commercially.

But Anthropic also paid a price—it was shut out by the government, at least for now.

This is the dilemma: no choice is perfect.

Refusal means potentially losing influence, or even being excluded from rule-making. Acceptance means endorsing, with one's own technology, actions one cannot fully control.

Kalinowski's answer was a third path—leaving.

It was the most honest thing she could do.

03 The Battle for Silicon Valley's Soul Has Just Begun

If we zoom out, the significance of this event far exceeds one person's resignation.

The integration of AI and the military is a choice the entire industry will eventually have to face. The Pentagon has the budget, the needs, and the technical integration capabilities; it will not stop extending olive branches to AI companies. And AI companies—whether it's OpenAI pursuing AGI, Anthropic emphasizing safety, or other players—will eventually have to give their answers to this question.

Altman's strategy is to attempt to accept commercial reality while drawing bottom lines through contract wording. But as multiple legal and governance experts have pointed out, those wordings seem more like public relations protection than hard constraints at the technical level.

A more fundamental problem is that when AI models are deployed into classified networks and begin participating in military decision-making, the external world has no ability to verify whether those "guarantees" are actually being executed.

The lack of transparency is itself the greatest risk.

Kalinowski was at OpenAI for less than a year and a half but chose to leave at this juncture. She did not issue a lengthy public statement or criticize anyone by name; she simply used action to draw her own boundary.

In a sense, this is more powerful than any policy article.

AI hardware and robotics engineering was originally one of the most exciting frontiers in Silicon Valley. When Kalinowski left, she took away not just a resume, but also a question for everyone still in the industry—

How far are you willing to go to be responsible for the things you build?

Related Q&A

Q: Who is Caitlin Kalinowski and why did she resign from OpenAI?

A: Caitlin Kalinowski was the head of hardware and robotics engineering at OpenAI. She resigned because she could not accept the potential domestic surveillance and autonomous weapons applications that could arise from OpenAI's contract with the U.S. Department of Defense.

Q: What was the public and user reaction to OpenAI's contract with the Department of Defense?

A: The public reaction was highly negative. On the day of the announcement, ChatGPT uninstallations surged by 295%, the #QuitGPT movement spread across social media, and Claude surpassed ChatGPT in U.S. daily downloads, becoming the top free app on the Apple App Store.

Q: What specific concern did experts raise about the revised contract's wording regarding "intentional" use?

A: Experts, such as a lawyer from the Electronic Frontier Foundation, pointed out that adding the word "intentionally" was itself a loophole. Intelligence and law enforcement agencies often rely on "incidental" or "commercially purchased" data to circumvent stronger privacy protections, meaning the wording did not provide a real constraint.

Q: What is the significance of the high attrition rate in OpenAI's ethics and AI safety teams?

A: The attrition rate in OpenAI's ethics and AI safety teams has reached 37%, with many leaving due to conflicts with the company's values or an unwillingness to accept AI for military use. This indicates an internal cultural shift as the company prioritizes commercial expansion over its original safety and ethical principles.

Q: How does the situation at OpenAI reflect a broader dilemma facing AI companies regarding government and military contracts?

A: The situation highlights a fundamental dilemma: refusing a contract might mean losing influence and being excluded from rule-making, while accepting it means endorsing uses of technology that the company cannot fully control. There is no perfect choice, as demonstrated by Anthropic's refusal costing it government business and OpenAI's acceptance leading to public backlash and internal dissent.
