From Housemate Arguments to a $300 Billion Showdown: WSJ Long-Form Article First Reveals the Decade-Long Personal Feud Between Anthropic and OpenAI Founders

Published by marsbit on 2026-03-28 · Updated 2026-03-28

Summary

The Wall Street Journal reveals the decade-long personal rift between Anthropic and OpenAI founders, Dario and Daniela Amodei, and OpenAI's Sam Altman and Greg Brockman. The conflict, rooted in philosophical differences over AI development and governance, began in 2016. Tensions escalated over leadership disputes, credit attribution, and management styles, culminating in the Amodeis and nearly a dozen employees leaving OpenAI in late 2020 to form Anthropic. Today, both companies are valued over $300 billion, competing fiercely while their founders' unresolved personal animosity continues to shape the global AI landscape.

Wall Street Journal reporter Keach Hagey has published a long-form investigative report that, drawing on extensive interviews with current and former employees and people close to executives at both companies, discloses for the first time the decade-long personal feud between the founders of Anthropic and OpenAI. What shaped the global AI landscape was not just a battle over technology roadmaps, but a personal wound that never healed.

Dario Amodei's rhetoric internally in recent months has been far more intense than in public. He compared Sam Altman's legal dispute with Elon Musk to "Hitler vs. Stalin," called OpenAI President Greg Brockman's $2.5 million donation to a pro-Trump super PAC "evil," and likened OpenAI and other competitors to "tobacco companies selling products they know are harmful."

After the Pentagon dispute escalated, he wrote on Slack calling OpenAI "mendacious," stating, "These facts indicate a pattern of behavior I have seen repeatedly in Sam Altman."

Internally, Anthropic refers to this branding strategy as creating a "healthy alternative" to its competitor. An ad during this year's Super Bowl, which implicitly mocked OpenAI for embedding ads in its chatbot, is a public manifestation of this strategy.

The story begins in the living room of a shared house on Delano Street in San Francisco in 2016. Dario and his sister Daniela Amodei lived there, and OpenAI co-founder Brockman often visited due to his personal friendship with Daniela. One day, Brockman, Dario, and Daniela's then-fiancé, effective altruism philanthropist Holden Karnofsky, sat together arguing about the right path for AI development: Brockman believed all Americans should be informed about what was happening at the AI frontier, while Dario and Karnofsky believed sensitive information should be reported to the government first, not broadcast to the public. This disagreement later became the philosophical dividing line between the two companies.

Impressed by OpenAI's talent roster, Dario joined in mid-2016, staying up late with Brockman training AI agents to play video games. But over four years of working together, conflicts deepened around power and a sense of belonging. In 2017, Musk, OpenAI's main funder at the time, demanded a list of each employee's contributions and conducted layoffs based on it. About 10% to 20% of the roughly 60-person team were fired one by one. Dario saw this as cruel; one of those laid off later became an Anthropic co-founder.

That same year, an ethics advisor hired by Dario proposed that OpenAI act as a coordinating entity between AI companies and the government. Brockman extrapolated from this the idea of "selling AGI to the nuclear powers on the UN Security Council." Dario considered this nearly treasonous and at one point considered resigning.

After Musk exited in 2018, Altman took over leadership. He and Dario agreed that employees lacked confidence in the leadership of Brockman and Chief Scientist Ilya Sutskever. Dario stayed on the condition that the two would no longer be his supervisors, but soon discovered that Altman had simultaneously promised the latter two the authority to fire him—two contradictory promises.

After the development of the GPT series began, the most intense conflict among executives erupted over who could work on the language model project. Dario, then research director, barred Brockman from involvement. Daniela, who co-led the project with Alec Radford, threatened to resign as lead. Radford's personal wishes were caught in a proxy war among the executives.

Dario's seniority grew with the success of GPT-2 and GPT-3, but he felt Altman downplayed his contributions. He was angry when Brockman went on a podcast to discuss the OpenAI charter, feeling his greater contribution to the charter warranted an invitation; he was similarly displeased to learn that Brockman and Altman were meeting former President Obama but excluded him.

The conflict came to a head in a confrontational meeting. Altman called the Amodei siblings into a conference room, accusing them of encouraging colleagues to submit negative feedback about him to the board. They denied it. Altman said the information came from another executive. Daniela immediately called that executive in to confront them, and the person said they knew nothing about it.

Altman then denied having said that. A fierce argument ensued. In early 2020, Altman asked executives to write peer reviews for each other. Brockman wrote a strongly worded review accusing Daniela of abusing power and using bureaucratic processes to exclude dissenters; Altman previewed it and called it "tough but fair." Daniela rebutted it point by point. The argument escalated to the point where Brockman once proposed withdrawing the review.

At the end of 2020, the team centered around Dario decided to leave, with Daniela leading negotiations with lawyers regarding their departure. Altman personally went to Dario's home to persuade him to stay. Dario proposed reporting directly only to the board and explicitly stated he could not work with Brockman. Before leaving, he wrote a long memo dividing AI companies into "market-oriented" and "public benefit-oriented" types, suggesting the ideal mix was 75% public benefit, 25% market. Weeks later, Dario, Daniela, and nearly a dozen employees left OpenAI to found Anthropic.

Five years later, both companies are valued at over $300 billion and are racing to be the first to IPO. During the group photo at the close of the AI summit in New Delhi this February, Indian Prime Minister Modi and the assembled tech leaders raised their hands high. Amodei and Altman declined to join in, managing only an awkward elbow bump.

Related Questions

Q: What was the core philosophical disagreement about AI development between Dario Amodei and Greg Brockman in 2016?

A: The core disagreement was about how to disseminate information about AI advancements. Greg Brockman believed the information should be broadcast to the entire American public, while Dario Amodei and Holden Karnofsky argued that sensitive information should be reported to the government first, not broadcast publicly.

Q: What internal brand strategy does Anthropic use to position itself against OpenAI?

A: Anthropic's internal brand strategy is to position itself as a "healthy alternative" to its competitors, specifically OpenAI. This was exemplified by a Super Bowl ad that implicitly criticized OpenAI for placing ads in its chatbot.

Q: What major event involving Elon Musk at OpenAI did Dario Amodei view as "cruel"?

A: Dario Amodei viewed Elon Musk's 2017 directive to rank every OpenAI employee by contribution and subsequently lay off 10% to 20% of the roughly 60-person team as "cruel."

Q: What was the final ultimatum Dario Amodei gave to Sam Altman in an attempt to stay at OpenAI before leaving?

A: Dario Amodei's final condition was that he would stay at OpenAI only if he reported directly to the board, and he made clear that he could not work with Greg Brockman.

Q: How did Dario Amodei categorize AI companies in a memo written just before leaving OpenAI, and what was his proposed ideal ratio?

A: In his memo, Dario Amodei categorized AI companies into "market-oriented" and "public benefit-oriented" types. He proposed that the ideal mix for a company should be 75% public benefit and 25% market.
