The Person Building Robots for OpenAI Sees a Terrifying Future

marsbit · Published on 2026-03-09 · Last updated on 2026-03-09

Abstract

Caitlin Kalinowski, head of hardware and robotics engineering at OpenAI, resigned in March 2026 in protest against the company's contract with the U.S. Department of Defense, which she believed could enable domestic surveillance and autonomous weapons applications. Her departure came shortly after OpenAI signed a deal allowing the Pentagon to use its AI models in classified networks—a contract that rival Anthropic had previously refused on ethical grounds. The announcement triggered a #QuitGPT movement, causing a 295% surge in ChatGPT uninstalls and boosting Anthropic’s Claude to the top of app stores. Under public pressure, CEO Sam Altman revised the contract to include wording against "intentional" use in domestic surveillance, though experts noted legal loopholes remained. Kalinowski’s role involved developing physical AI systems, making her particularly concerned about the potential militarization of embodied AI. Her resignation reflects broader internal dissent at OpenAI, where ethics and safety teams have seen a 37% attrition rate amid disagreements over military use and company values. The situation highlights a growing tension in Silicon Valley between commercial expansion and ethical boundaries. While Anthropic chose principle over partnership—and gained user trust—OpenAI’s acceptance of the contract signals a strategic shift that risks alienating talent and compromising transparency. Kalinowski’s exit poses a fundamental question to the industry: how far are builders willing to go to take responsibility for the things they create?

Author: Geek Old Friend

On March 7, 2026, when I saw the news of Caitlin Kalinowski's resignation, my first reaction was not shock, but—"Finally, someone is speaking with action."

Kalinowski was the head of hardware and robotics engineering at OpenAI, having joined in November 2024. She chose to leave after less than a year and a half.

Her reason was direct and weighty—she could not accept the potential domestic surveillance and autonomous weapons applications following OpenAI's contract with the U.S. Department of Defense.

This is not an ordinary loss of talent. This is someone who personally helped build the body of AI, resigning to tell the world that she refuses to be responsible for what the things she created might do.

To understand Kalinowski's departure, we must go back to what happened about a week earlier.

On February 28, Sam Altman announced that OpenAI had reached an agreement with the U.S. Department of Defense, allowing the Pentagon to use OpenAI's AI models in its classified networks. The news caused an uproar.

Interestingly, the "reference point" for this contract was the rival company Anthropic.

Just before this, Anthropic had rejected a similar proposal from the Pentagon, insisting on including stricter ethical safeguards in the contract. As a result, Defense Secretary Pete Hegseth directly criticized Anthropic on X, calling its behavior a "masterclass in arrogance and betrayal," and echoed the Trump administration's order to halt cooperation with Anthropic.

OpenAI then stepped in and took over the deal.

User reactions were intense. On February 28, the number of ChatGPT uninstallations surged by 295% compared to the previous day. The #QuitGPT movement quickly spread across social media, with over 2.5 million supporters of the digital boycott within three days. Claude took advantage of the situation to surpass ChatGPT as the top daily download in the U.S., reaching the top of the Apple App Store's free app chart.

Under pressure, Altman publicly admitted on March 3 that he "should not have rushed to launch this contract," calling it "opportunistic and hasty," and announced revised contract terms to clarify that "AI systems should not be intentionally used for domestic surveillance of U.S. personnel and citizens."

But the word "intentionally" itself is a loophole. A lawyer from the Electronic Frontier Foundation pointed out that intelligence and law enforcement agencies often rely on "incidental" or "commercially purchased" data to circumvent stronger privacy protections—adding 'intentionally' does not equate to a real restriction.

Kalinowski's resignation occurred against this backdrop.

01 What She Saw Was More Specific Than We Imagine

While most people were still debating whether OpenAI was compromising with the government, Kalinowski faced a more specific and brutal problem—her team was building robots.

Hardware and robotics engineering is not abstract work like writing code or tuning parameters. It is about giving AI hands, feet, and eyes. When OpenAI's cooperation with the Department of Defense extends from "model use" to potential future "embodied AI military applications," the nature of Kalinowski's work changes.

Researchers in the field of autonomous weapons have long warned of this day.

Current U.S. Department of Defense policies do not require autonomous weapons to obtain human approval before using force. In other words, the contract OpenAI signed does not technically prevent its models from becoming part of a system that "lets GPT decide to kill someone".

This is not alarmism. Jessica Tillipman, a lecturer in government procurement law at Georgetown University, analyzed OpenAI's revised contract and clearly stated that the wording "does not give OpenAI the freedom to prohibit legitimate government use in an Anthropic-style manner." It only states that the Pentagon cannot use OpenAI technology to violate "existing laws and policies"—but existing laws have significant gaps in regulating autonomous weapons.

Governance experts at Oxford University have made similar judgments, believing that OpenAI's agreement is "unlikely to close" the structural gaps in governance left by AI-driven domestic surveillance and autonomous weapon systems.

Kalinowski's departure is her personal response to this judgment.

02 What Is Happening Inside OpenAI

Kalinowski is not the first to leave, and she is likely not the last.

Data shows that the attrition rate in OpenAI's ethics team and AI safety team has reached 37%, with most people citing "misalignment with company values" or "inability to accept AI for military purposes" as their reasons for leaving. Research scientist Aidan McLaughlin wrote internally, "I personally don't think this deal is worth it."

It is worth noting the timing of this wave of departures—precisely when OpenAI is rapidly expanding its commercial footprint. Around the time of the defense contract controversy, the company announced an expansion of its existing $38 billion agreement with AWS by $100 billion over eight years; it also readjusted its publicly disclosed spending targets, expecting total revenue to exceed $280 billion by 2030.

Commercial acceleration on one side, a steady exodus from the safety teams on the other. This divergence is the most important coordinate for understanding OpenAI's current situation.

A company's values are ultimately reflected in who it retains and who it cannot retain. When those most concerned with "how this technology will be used" begin to leave, it is not difficult to infer the direction the remaining organizational structure will slide toward.

Anthropic chose another path in this game—rejecting the contract, enduring the Defense Department's anger, but winning the trust of many users. During that period, Claude's downloads increased against the trend, proving to some extent that a "principled refusal" is not necessarily a losing strategy commercially.

But Anthropic also paid a price—it was shut out by the government, at least for now.

This is the dilemma: no choice is perfect.

Refusal means potentially losing influence, or even being excluded from rule-making. Acceptance means endorsing, with one's own technology, actions one cannot fully control.

Kalinowski's answer was a third path—leaving.

It was the most honest thing she could do.

03 The Battle for Silicon Valley's Soul Has Just Begun

If we zoom out, the significance of this event far exceeds one person's resignation.

The integration of AI and the military is a choice the entire industry will eventually have to face. The Pentagon has the budget, the needs, and the technical integration capabilities; it will not stop extending olive branches to AI companies. And AI companies—whether it's OpenAI pursuing AGI, Anthropic emphasizing safety, or other players—will eventually have to give their answers to this question.

Altman's strategy is to attempt to accept commercial reality while drawing bottom lines through contract wording. But as multiple legal and governance experts have pointed out, those wordings seem more like public relations protection than hard constraints at the technical level.

A more fundamental problem is that when AI models are deployed into classified networks and begin participating in military decision-making, the external world has no ability to verify whether those "guarantees" are actually being executed.

The lack of transparency is itself the greatest risk.

Kalinowski was at OpenAI for less than a year and a half but chose to leave at this juncture. She did not issue a lengthy public statement or criticize anyone by name; she simply used action to draw her own boundary.

In a sense, this is more powerful than any policy article.

AI hardware and robotics engineering was originally one of the most exciting frontiers in Silicon Valley. When Kalinowski left, she took away not just a resume, but also a question for everyone still in the industry—

How far are you willing to go to be responsible for the things you build?

Related Questions

Q: Who is Caitlin Kalinowski and why did she resign from OpenAI?

A: Caitlin Kalinowski was the head of hardware and robotics engineering at OpenAI. She resigned because she could not accept the potential domestic surveillance and autonomous weapons applications that could arise from OpenAI's contract with the U.S. Department of Defense.

Q: What was the public and user reaction to OpenAI's contract with the Department of Defense?

A: The public reaction was highly negative. On the day of the announcement, ChatGPT uninstallations surged by 295%, the #QuitGPT movement spread across social media, and Claude surpassed ChatGPT in U.S. daily downloads, becoming the top free app on the Apple App Store.

Q: What specific concern did experts raise about the revised contract's wording regarding 'intentional' use?

A: Experts, such as a lawyer from the Electronic Frontier Foundation, pointed out that adding the word 'intentional' was a loophole. Intelligence and law enforcement agencies often rely on 'incidental' or 'commercially acquired' data to circumvent stronger privacy protections, meaning the wording did not provide a real constraint.

Q: What is the significance of the high attrition rate in OpenAI's ethics and AI safety teams?

A: The attrition rate in OpenAI's ethics and AI safety teams has reached 37%, with many leaving due to conflicts with the company's values or an unwillingness to accept AI for military use. This indicates an internal cultural shift as the company prioritizes commercial expansion over its original safety and ethical principles.

Q: How does the situation at OpenAI reflect a broader dilemma facing AI companies regarding government and military contracts?

A: The situation highlights a fundamental dilemma: refusing a contract might mean losing influence and being excluded from rule-making, while accepting it means endorsing uses of technology that the company cannot fully control. There is no perfect choice, as demonstrated by Anthropic's refusal costing it government business and OpenAI's acceptance leading to public backlash and internal dissent.

