According to monitoring by 1M AI News, Pulitzer Prize winner Ronan Farrow and New Yorker journalist Andrew Marantz have published a long-form investigative report based on interviews with more than 100 informed sources. It discloses in full, for the first time, two core documents: a roughly 70-page confidential file compiled in the fall of 2023 by former OpenAI Chief Scientist Ilya Sutskever, and more than 200 pages of internal notes that Anthropic CEO Dario Amodei accumulated during his tenure at OpenAI. Neither document had ever been made public.
Sutskever's confidential file contains Slack messages, HR documents, and screenshots taken with a mobile phone (reportedly to avoid monitoring on company devices). It opens with a list, "Sam exhibits a persistent pattern of...", whose first item is "lying". The document accuses Altman of misrepresenting facts to executives and the board and of deceiving colleagues about safety processes. Sutskever reportedly told another board member at the time: "I don't think Sam is the one who should have his hand on the button."
Amodei's notes, titled "My Experience at OpenAI" (subtitled "Private Document, Do Not Share"), circulated among Silicon Valley peers but were never publicly released. They state that "The problem with OpenAI is Sam himself" and accuse Altman of denying, to Amodei's face, terms that were already in the contract during the signing of the $1 billion investment agreement with Microsoft, and of continuing to deny them even after Amodei read them aloud verbatim on the spot.
The report also reveals several previously undisclosed facts:
1. The independent investigation promised after Altman's reinstatement never produced a written report. WilmerHale, the law firm responsible for the investigation (and which previously led investigations into Enron and WorldCom), gave only oral briefings to two new board members. The decision to forgo a written report was based in part on advice from those two board members' personal lawyers. Informed sources described the investigation as "appearing designed to limit transparency," and some current board members believe the matter could lead to "a need for re-investigation."
2. The Superalignment team actually received only about 1%-2% of the publicly committed 20% of computing power, most of it allocated to "the oldest clusters with the worst chips." When the journalists requested interviews with researchers working on existential safety, an OpenAI representative replied: "What do you mean by 'existential safety'? That's not a thing."
3. Around 2018, the executive team seriously discussed a plan referred to internally as the "National Plan": having major powers (including China and Russia) bid to purchase AI technology. The then-Head of Policy, Jack Clark, described its goal as "creating a prisoner's dilemma where all countries would have to give us funding." The plan was shelved after several employees threatened to resign.
4. Multiple Microsoft executives expressed strong dissatisfaction with Altman. One executive stated, "He misrepresents, distorts, renegotiates, and breaches agreements," adding that "there is a small but real possibility that he will eventually be remembered alongside figures like Ponzi-scheme mastermind Bernie Madoff or FTX founder Sam Bankman-Fried."
During a call with the board after being fired, Altman was asked to admit to his pattern of deception. He repeatedly said, "This is ridiculous," and then stated, "I can't change my character." A board member present interpreted this as: "That sentence means 'I have a trait of lying to people, and I won't stop.'" Aaron Swartz, a programmer from Y Combinator's first batch who died in 2013, had warned friends years earlier: "You have to understand that Sam can never be trusted. He is a sociopath and will do anything." The report notes that more than one interviewee volunteered the word "sociopath" unprompted.
In more than a dozen conversations with the journalists, Altman denied intentional deception, characterizing his constantly shifting promises as "good-faith adaptations" to a rapidly changing environment and attributing the early criticism to his tendency to be "overly conflict-averse." When asked whether running an AI company demands a higher standard of integrity, he added: "Yes, it demands a higher level of integrity, and I feel the weight of that responsibility every day."







