Altman Drops Bombshell While Musk is Away: He Once Wanted His Children to Inherit OpenAI

Published by marsbit on 2026-05-13 · Updated 2026-05-13

Summary

In a California court, Sam Altman testified for the first time in the ongoing legal battle between Elon Musk and OpenAI. Altman made a striking claim: Musk once suggested that control of OpenAI could one day be passed down to his children. This statement reframes the long-standing conflict not as a simple governance dispute but as a foundational power struggle. Altman sought to counter the narrative that OpenAI betrayed its original non-profit, idealistic mission. He argued that from the beginning, it was Musk who sought increasing control over the organization, including a larger equity stake and ultimate decision-making authority. Altman opposed this, citing OpenAI's core principle that AGI should not be controlled by any single individual. He also addressed the key point of contention about OpenAI's shift to a for-profit structure, claiming Musk was aware of and initially supportive of exploring such a model to secure the massive funding needed for advanced AI research. Altman framed the change as a practical necessity, not a betrayal. Further testimony revealed internal concerns after Musk left OpenAI's board, with worries he might take retaliatory action. Altman critiqued Musk's management style as unsuitable for a research lab, damaging morale and culture. Throughout his testimony, Altman's focus appeared to shift from technological idealism to the realities of organizational governance and resource requirements. Regarding his brief ouster in 2023, Altman stated he seriously considered leaving for Microsoft but ultimately returned because OpenAI was too important to him.

While Musk was away on a trans-Pacific business trip, Altman, making his first court appearance in the "OpenAI fruit theft" lawsuit, delivered a statement in a California courtroom that shocked everyone:

Musk once believed that the future control of OpenAI could be passed on to his children.

Wow, with one sentence, this long-running drama among OpenAI's founding team has shifted from a "corporate governance dispute" to an AI version of "Succession."

Hello everyone, welcome to Week Three of the trial: Musk vs. the OpenAI Brothers (Altman and Brockman).

Today, Altman himself testified for the first time.

In recent years, a relatively mainstream narrative has surrounded OpenAI: that OpenAI is becoming increasingly commercialized, more like a super AI company; Altman is increasingly acting like a capital manipulator; and (regardless of motive) Musk is the one who left angrily and later reported OpenAI for "betraying its original mission."

But in this trial, Altman attempted to completely reframe this story.

In his account, OpenAI is not the organization that betrayed its idealism.

From the very beginning, the person who wanted to control OpenAI and monopolize power was Musk.

Altman's First Full Account: Why OpenAI and Musk Parted Ways

The feud between Musk and OpenAI has been ongoing for quite some time, argued in the media, on social platforms, and now in court.

This trial is almost the first time Altman has told the story from his own perspective, giving the outside world a glimpse of the early internal power struggles at OpenAI.

According to him, from its founding, OpenAI firmly believed and executed the principle that "AGI should not be controlled by any single individual."

To prevent super AI from being monopolized by a few in the future, OpenAI adopted a non-profit structure at its inception.

But how fickle humans are!

According to Altman's description, as time went on, Musk increasingly desired greater control, including a higher share of equity, final decision-making power over the future organization, and dominance over OpenAI's development direction.

The most explosive part was the "pass it to the children" statement.

According to Altman, there was once an internal discussion about what would happen if the person controlling OpenAI in the future passed away.

Musk's idea at the time was, "Let's just make it hereditary. If we're gone, pass the control to our kids."

Altman stated that he was very opposed to this idea at the time.

Originally, the public found something like an "OpenAI organizational structure dispute" hard to grasp, and had even grown a bit tired of the drama. But "hereditary AGI control" immediately lit up the eyes of gossip enthusiasts!

Especially since Musk has long cultivated a persona of upholding ideals like "open AI, humanity's future, preventing AI from being controlled by a few."

Then Altman shot a knowing smile at Musk, who was flying toward China on his plane: Buddy, nobody knew that what you envisioned back then wasn't "OpenAI for all humanity" but "OpenAI for my family."

Besides the control issue, Altman also mentioned another key event: that Musk once wanted OpenAI to merge with Tesla.

Altman strongly opposed this at the time.

In court, Altman explained that Tesla is essentially a car company with its own commercial goals, while OpenAI carries a different mission, more focused on long-term research and future infrastructure.

If merged into Tesla, OpenAI's development direction would likely be skewed by commercial objectives.

"Musk Knew All Along OpenAI Would Move Toward a For-Profit Structure"

In this trial, Altman also vehemently denied the accusation that "OpenAI betrayed its original mission."

This accusation is essentially the core narrative Musk has used to condemn and criticize OpenAI in the past.

Musk's public stance has consistently been:

OpenAI started as a non-profit with a mission to develop AI safely for humanity; but later it gradually turned into a super AI company, deeply tied to Microsoft and profit-driven.

But Altman stated in court: "Musk didn't find out later that OpenAI would move toward a for-profit structure."

According to his testimony, Musk not only knew about the relevant discussions back then, but even supported OpenAI exploring for-profit models.

During their second meeting at Tesla headquarters, he and Musk reviewed many documents outlining the creation of a for-profit company by OpenAI. Those "term sheets" detailed how much the non-profit would contribute to the new entity and what it would receive in return, including an "economic interest" in the for-profit venture.

Altman said Musk praised this move, saying the lab desperately needed massive funding.

Reuters wrote in an article about the trial that OpenAI believes Musk filed the lawsuit mainly out of jealousy over OpenAI's success after he left, and his failure to gain control of the company.

Altman also mentioned that OpenAI has now raised a cumulative $175 billion from investors for model training and computing power.

Many founders have stated that at this stage, without huge funds and massive computing power, it's impossible to continue advancing cutting-edge AI research.

OpenAI's later shift to a for-profit structure, in his view, was more a matter of practical necessity than a betrayal of idealism.

Fearing Retaliatory Action from Musk

That day, Altman also shared many details that had never been fully disclosed before.

Much of the content redefined his relationship with Musk.

For example, he mentioned that after Musk left the OpenAI board, there was internal concern that he might take some kind of retaliatory action.

Even Shivon Zilis—a member of OpenAI's founding team and the mother of four of Musk's children—advised Altman in private communications on how to consider business proposals without "upsetting" Musk.

Altman didn't elaborate with more specifics, but the statement itself is intriguing enough.

Meanwhile, during the trial, he also commented that Musk "doesn't know how to run a good research lab".

Musk's management style might work for engineering and manufacturing, but it was ineffective at OpenAI.

In his account, Musk left some key researchers demoralized: he asked Brockman and Ilya to list researchers and their achievements, rank them, and then applied a chainsaw-like management style.

"This caused enormous, long-term damage to the organizational culture," Altman said.

This is also one of the most fundamental differences Altman wanted to highlight between OpenAI and Musk.

Musk's management style has long leaned toward an "engineering iron army" model, emphasizing speed, pressure, and results; but OpenAI's core group of researchers is, by nature, closer to an academic research organization.

Conflict between the two cultures was inevitable.

Finally, it's worth noting that many attendees observed that throughout the trial, Altman talked less about "technological ideals" and increasingly used the lenses of "organizational governance" and "practical resources" to explain matters related to OpenAI.

Altman is indeed becoming more like the CEO of a large tech organization, rather than the AGI idealist entrepreneur he was in the early days.

One More Thing

Apart from OpenAI's history, part of the trial involved the famous "Altman ouster incident" of 2023.

(BTW, Ilya testified a few days ago, stating firmly that he had no regrets about participating in Altman's removal.)

Altman stated that after being removed, he seriously considered leaving OpenAI to go to Microsoft.

But he ultimately decided to return because OpenAI was too important to him.

He said, "I would run back into a burning building to save it."

References:

[1] https://www.nytimes.com/live/2026/05/12/technology/openai-trial-sam-altman-elon-musk/this-is-sam-altmans-first-time-testifying-in-court

[2] https://www.businessinsider.com/sam-altman-faces-awkward-grilling-over-toxic-culture-of-lying-2026-5

[3] https://techcrunch.com/2026/05/12/musk-mulled-handing-openai-to-his-children-altman-testifies/

[4] https://www.wired.com/story/ilya-sutskever-testifies-musk-v-altman-trial/

This article is from the WeChat public account "QbitAI," author: Heng Yu

Related Questions

Q: What explosive statement did Sam Altman make about Elon Musk in court?

A: Sam Altman testified that Elon Musk once believed future control of OpenAI could be passed on to his own children.

Q: What was the core disagreement about the control of OpenAI according to Altman's testimony?

A: According to Altman, Musk increasingly wanted greater control over OpenAI, including a larger share of equity, ultimate decision-making authority, and dominance over its direction, which conflicted with OpenAI's founding principle that AGI should not be controlled by a single individual.

Q: How did Sam Altman respond to the accusation that OpenAI betrayed its founding non-profit mission?

A: Altman denied the accusation, stating that Musk was aware of and even supportive of OpenAI's exploration of for-profit models early on, and that the shift was a practical necessity for funding advanced AI research, not a betrayal of idealism.

Q: What critique did Altman level against Elon Musk's management style at OpenAI?

A: Altman criticized Musk's management, stating it was suited for engineering and manufacturing but ineffective for a research lab. He claimed it demoralized key researchers and caused long-term damage to OpenAI's organizational culture.

Q: Why did Altman return to OpenAI after being ousted in 2023?

A: Altman stated that although he seriously considered moving to Microsoft, he decided to return to OpenAI because it was too important to him, comparing his decision to running back into a burning building to save it.

