An Analysis of The New Yorker's In-Depth Investigation: Why Do OpenAI Insiders Believe Altman Is Untrustworthy?

marsbit | Published 2026-04-07 | Last updated 2026-04-07

Introduction

"The New Yorker investigation, based on internal documents and interviews with over 100 sources, reveals deep internal distrust in OpenAI’s leadership, particularly toward CEO Sam Altman. Key allegations include a pattern of dishonesty, undermining safety protocols, and prioritizing commercial interests over OpenAI’s original non-profit mission to develop AI safely. Chief Scientist Ilya Sutskever compiled a 70-page dossier accusing Altman of repeatedly lying to the board—for instance, falsely claiming GPT-4 features had passed safety reviews. Anthropic co-founder Dario Amodei’s private notes further detail how Microsoft’s investment deal effectively neutered OpenAI’s safety commitments. The report also highlights unfulfilled promises, such as allocating only 1-2% of promised computing resources to critical safety teams. Internal conflicts extend to CFO Sarah Friar, who opposed Altman’s aggressive IPO timeline amid financial concerns. Microsoft executives compared Altman to fraudsters like SBF, citing a tendency to distort facts and renege on agreements. Critics argue that Altman’s unchecked authority and alleged disregard for transparency pose significant risks given OpenAI’s powerful, potentially dangerous AI technology. The company’s transformation from a safety-first non-profit to a profit-driven entity raises fundamental questions about its governance and ethical commitments."

Original Author: Xiaobing, Deep Tide TechFlow

In the fall of 2023, OpenAI Chief Scientist Ilya Sutskever sat in front of his computer and completed a 70-page document.

This document was compiled from Slack message logs, HR communication records, and internal meeting minutes, all to answer one question: can Sam Altman, the person in charge of what might be the most dangerous technology in human history, really be trusted?

Sutskever's answer was written on the first line of the first page, under a list titled "Sam exhibits a consistent pattern of behavior..."

First item: Lying.

Two and a half years later, investigative journalists Ronan Farrow and Andrew Marantz published an extensive report in The New Yorker. They interviewed over 100 people involved, obtained previously undisclosed internal memos, and accessed more than 200 pages of private notes that Anthropic founder Dario Amodei kept during his time at OpenAI. The story these documents piece together is far uglier than the 2023 "power struggle": how OpenAI transformed, step by step, from a nonprofit dedicated to humanity's safety into a commercial machine, with almost every safety guardrail dismantled by the same person.

Amodei's conclusion in his notes was more blunt: "The problem with OpenAI is Sam himself."

OpenAI's "Original Sin" Setup

To understand the weight of this report, one must first grasp how unique OpenAI is as a company.

In 2015, Altman and a group of Silicon Valley elites did something almost unprecedented in business history: they used a nonprofit organization to develop what could be the most powerful technology in human history. The board's duty was stated clearly: safety comes before the company's success, even before the company's survival. Simply put, if OpenAI's AI ever became dangerous, the board would be obligated to shut the company down itself.

The entire structure bets on one assumption: the person in charge of AGI must be an extremely honest person.

What if they bet wrong?

The core bombshell of the report is that 70-page document. Sutskever is not one to play office politics; he is one of the world's top AI scientists. But by 2023 he had become increasingly convinced of one thing: Altman was consistently telling falsehoods to executives and the board.

A specific example: in December 2022, Altman assured the board at a meeting that multiple features of the upcoming GPT-4 had passed safety reviews. Board member Helen Toner asked to see the approval documents, only to find that two of the most controversial features (user-customizable fine-tuning and personal-assistant deployment) had never been approved by the safety panel.

Something even more outrageous happened in India. An employee reported the "violation" to another board member: Microsoft had launched an early version of ChatGPT in India ahead of schedule, without completing the required safety reviews.

Sutskever recorded another incident in his memo: Altman once told then-CTO Mira Murati that the safety approval process wasn't that important and that the company's general counsel had endorsed this view. Murati went to the general counsel to confirm, and got the reply: "I don't know where Sam got that impression."

Amodei's 200 Pages of Private Notes

Sutskever's document reads like a prosecutor's indictment. Amodei's 200-plus pages of notes read more like a diary kept by an eyewitness at the crime scene.

During his years as head of safety at OpenAI, Amodei watched the company retreat step by step under commercial pressure. His notes record a key detail of the 2019 Microsoft investment deal: he had inserted a "merge-and-assist" clause into the OpenAI charter, meaning, roughly, that if another company found a safer path to AGI, OpenAI should stop competing and help that company instead. This was the safety guarantee he valued most in the entire deal.

Just before the deal was signed, Amodei discovered something: Microsoft had obtained veto power over this clause. What does that mean? Even if a competitor one day truly found a better path, Microsoft could block OpenAI's obligation to assist with a single word. The clause remained on paper, but it was effectively void from the day the deal was signed.

Amodei later left OpenAI and founded Anthropic. The competition between the two companies is, at bottom, a divergence over how AI should be developed.

The Vanished 20% Compute Promise

One detail in the report sends chills down the spine. It concerns OpenAI's "Superalignment" team.

In mid-2023, Altman emailed a Berkeley PhD student researching "deceptive alignment" (where an AI behaves well in tests but does its own thing after deployment), saying he was very concerned about the issue and was considering a $10 billion global research prize. Encouraged, the student took a leave of absence and joined OpenAI.

Then Altman changed his mind: no external prize; instead, a "Superalignment" team would be formed internally. The company announced with great fanfare that it would allocate "20% of its existing compute" to the team, potentially worth over $10 billion. The announcement's wording was dead serious: if the alignment problem went unsolved, AGI could lead to "human disempowerment or even human extinction".

Jan Leike, appointed to lead this team, later told reporters that the promise itself was a very effective "talent retention tool".

The reality? Four people who worked on or closely with the team said that the compute actually allocated was only 1% to 2% of the company's total, and on the oldest hardware. The team was later disbanded, its mission unfulfilled.

When reporters requested interviews with OpenAI staff responsible for "existential safety" research, the company's PR response was laughable: "That's not an... actual thing that exists."

Altman himself was candid. He told reporters that his "intuition doesn't align with a lot of traditional AI safety stuff", and that OpenAI would still do "safety projects, or at least projects adjacent to safety".

The Sidelined CFO and the Impending IPO

The New Yorker report was only half of that day's bad news. The same day, The Information broke another bombshell: a serious disagreement between OpenAI CFO Sarah Friar and Altman.

Friar had privately told colleagues that she didn't think OpenAI was ready to go public this year, for two reasons: the procedural and organizational workload was too large, and the financial risk from Altman's promised $600 billion in compute spending over five years was too high. She wasn't even sure OpenAI's revenue growth could support those commitments.

But Altman wants to push for an IPO in the fourth quarter of this year.

Even more absurd: Friar no longer reports directly to Altman. As of August 2025, she has reported to Fidji Simo, CEO of OpenAI's Applications arm. And Simo went on sick leave just last week for health reasons. Consider the situation: a company rushing toward an IPO, a CEO and CFO with fundamental disagreements, a CFO who doesn't report to the CEO, and the CFO's direct superior on leave.

Even executives inside Microsoft couldn't stand it, saying Altman "distorts facts, goes back on his word, and constantly overturns agreements already reached". One Microsoft executive went so far as to say: "I think there's a non-zero chance he ends up being remembered as a Bernie Madoff or SBF-level fraudster."

Altman's "Two-Faced" Portrait

A former OpenAI board member described two traits in Altman to the reporter. This passage is perhaps the most scathing character sketch in the entire report.

This director said Altman has an extremely rare combination of traits: in every face-to-face interaction, he has an intense desire to please the other person and be liked by them. Simultaneously, he has a near-sociopathic indifference to the consequences of deceiving others.

These two traits rarely appear together in one person. But for a salesman, it is the perfect gift.

The report offers a good analogy: Jobs was famous for his "reality distortion field"; he could make the world believe in his vision. But even Jobs never told customers, "If you don't buy my MP3 player, the people you love will die."

Altman has said similar things, about AI.

Why a CEO's Character Problem is Everyone's Risk

If Altman were merely the CEO of an ordinary tech company, these allegations would be, at most, juicy business gossip. But OpenAI is not ordinary.

By its own account, it is developing what could be the most powerful technology in human history, technology that can reshape the global economy and labor market (OpenAI just released a policy white paper on AI-driven unemployment) and that could also be used to create large-scale biological weapons or launch cyberattacks.

All the safety guardrails now exist in name only. The founders' nonprofit mission has given way to the IPO rush. The former chief scientist and the former head of safety both deem the CEO "untrustworthy". Partners compare him to SBF. In this situation, how can one CEO unilaterally decide when to release AI models that could change the fate of humanity?

Gary Marcus, an NYU professor emeritus and long-time AI-safety advocate, wrote a single sentence after reading the report: if some future OpenAI model could create large-scale biological weapons or launch catastrophic cyberattacks, are you really comfortable letting Altman alone decide whether to release it?

OpenAI's response to The New Yorker was concise: "Much of this article recycles previously reported events, using anonymous claims and selective anecdotes, with sources clearly having personal agendas."

A very Altman-style response: it addresses no specific allegation, does not deny the memos' authenticity, and questions only the sources' motives.

A Money Tree Grows on the Corpse of a Nonprofit

OpenAI's decade can be summarized in a story outline like this:

A group of idealists worried about AI risks created a mission-driven nonprofit organization. The organization made extraordinary technological breakthroughs. The breakthroughs attracted massive capital. Capital requires returns. The mission began to give way. Safety teams were disbanded. Those who questioned were purged. The nonprofit structure was changed to a for-profit entity. The board that once had the power to shut down the company is now filled with the CEO's allies. The company that once promised to dedicate 20% of its compute to safeguarding human safety now has PR people saying "that's not an actual thing that exists".

Over a hundred firsthand witnesses have pinned the same label on the story's protagonist: "unconstrained by the truth."

He is preparing to take this company public with a valuation exceeding $850 billion.

This article synthesizes information from public reports by The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, The Information, and other media outlets.

Related Questions

Q: What was the main finding of Ilya Sutskever's 70-page document regarding Sam Altman?

A: The document concluded that Sam Altman exhibited a consistent pattern of behavior, with the first and primary issue being lying. It alleged that Altman repeatedly provided false information to executives and the board.

Q: What specific safety commitment did OpenAI fail to fulfill in the "Superalignment team" example?

A: OpenAI publicly committed to allocating 20% of its compute to the Superalignment team for AI safety research, but in reality provided only 1% to 2% of its total compute, on the oldest hardware. The team was later disbanded without completing its mission.

Q: Why did Dario Amodei leave OpenAI to found Anthropic, according to his private notes?

A: Amodei left over a fundamental disagreement about how AI should be developed. A key reason was his discovery that Microsoft had obtained veto power over the crucial "merge-and-assist" clause he had inserted into OpenAI's charter as a safety guarantee, effectively rendering it void once the investment deal was signed.

Q: What major internal conflict is reported between Sam Altman and OpenAI's CFO, Sarah Friar?

A: Friar privately said that OpenAI was not ready for an IPO this year, citing the immense procedural and organizational workload and the financial risk of Altman's promised $600 billion in compute spending. Altman, in contrast, wants to push for an IPO in the fourth quarter. Moreover, Friar no longer reports directly to Altman, creating a dysfunctional reporting structure.

Q: How does the article characterize the fundamental risk posed by Sam Altman's leadership at OpenAI?

A: The article argues that because OpenAI is developing potentially the most powerful technology in human history, capable of reshaping the global economy or being weaponized, a CEO deemed "untrustworthy" by key former executives, and who has dismantled safety guardrails, poses an existential risk. It questions his unilateral power to decide when such powerful AI models are released.
