# Related Articles on OpenAI

The HTX News Center offers the latest articles and in-depth analysis on "OpenAI", covering market trends, project news, technology developments, and regulatory policy in the crypto industry.

The New Yorker In-Depth Investigation Analysis: Why Do OpenAI Insiders Believe Altman Is Untrustworthy?

The New Yorker investigation, based on internal documents and interviews with over 100 sources, reveals deep internal distrust of OpenAI's leadership, particularly CEO Sam Altman. Key allegations include a pattern of dishonesty, undermining safety protocols, and prioritizing commercial interests over OpenAI's original non-profit mission to develop AI safely. Chief Scientist Ilya Sutskever compiled a 70-page dossier accusing Altman of repeatedly lying to the board, for instance by falsely claiming GPT-4 features had passed safety reviews.

Anthropic co-founder Dario Amodei's private notes further detail how Microsoft's investment deal effectively neutered OpenAI's safety commitments. The report also highlights unfulfilled promises, such as allocating only 1-2% of promised computing resources to critical safety teams. Internal conflicts extend to CFO Sarah Friar, who opposed Altman's aggressive IPO timeline amid financial concerns.

Microsoft executives compared Altman to fraudsters like SBF, citing a tendency to distort facts and renege on agreements. Critics argue that Altman's unchecked authority and alleged disregard for transparency pose significant risks given OpenAI's powerful, potentially dangerous AI technology. The company's transformation from a safety-first non-profit to a profit-driven entity raises fundamental questions about its governance and ethical commitments.

marsbit · 04/07 03:40

70-Page Confidential Document's First Allegation: 'Lying', Altman Told the Board 'I Can't Change My Character'

In a major investigation, Pulitzer Prize winner Ronan Farrow and Andrew Marantz reveal two previously undisclosed documents: a ~70-page confidential file compiled by former OpenAI chief scientist Ilya Sutskever and over 200 pages of internal notes by Anthropic CEO Dario Amodei from his time at OpenAI. Sutskever's file, which opens with the accusation that Sam Altman exhibited a "pattern of lying," alleges he misled executives and the board on safety protocols and corporate matters. Amodei's notes similarly claim "the problem at OpenAI is Sam himself," citing instances like Altman denying agreed-upon terms in Microsoft's $1 billion deal.

Key revelations include:

- No written report was produced from the post-reinstatement independent investigation into Altman.
- OpenAI's superalignment team received only 1-2% of the promised computing resources, mostly on outdated clusters.
- In 2018, executives considered a "National Plan" to auction AI tech to nations including China and Russia.
- Microsoft executives expressed strong distrust toward Altman, with one comparing his risk profile to figures like Bernie Madoff.

During a board call after his firing, Altman reportedly said, "I can't change my personality," which a director interpreted as an admission of persistent dishonesty. Altman denies intentional deception, attributing his behavior to "well-intentioned adaptation" and conflict avoidance.

marsbit · 04/06 14:24

Two Acquisitions in One Day: OpenAI Buys 'Narrative', Anthropic Buys 'Barriers'

On April 2, OpenAI and Anthropic each announced an acquisition, reflecting their divergent strategies as both target an IPO by late 2026. OpenAI acquired tech talk show TBPN to shape public AI discourse and support its revenue base, which is 60% consumer-driven from ChatGPT subscriptions. In contrast, Anthropic purchased AI biotech startup Coefficient Bio for approximately $400 million in stock, continuing its focused strategy of deepening enterprise capabilities, particularly in high-switching-cost sectors like life sciences.

Over the past three years, OpenAI completed 15 acquisitions across diverse fields including hardware, media, and healthcare, spending over $7.7 billion on disclosed deals, such as the $6.5 billion purchase of Jony Ive's AI hardware firm. Anthropic made only three acquisitions, each precisely strengthening its product stack: Bun for coding infrastructure, Vercept for autonomous agents, and now Coefficient Bio for biotech R&D pipelines.

Anthropic's enterprise-focused revenue (80% of total) drives its strategy to lock in clients with vertical integration, as seen in its sequenced moves into life sciences and healthcare. Meanwhile, with a higher reliance on consumer subscriptions, OpenAI is investing in narrative influence: TBPN aims to boost ad revenue and steer the public AI conversation. Both companies are on accelerated IPO paths, with Anthropic eyeing a $60+ billion offering led by Goldman Sachs and JPMorgan, and OpenAI targeting a ~$1 trillion valuation. Their acquisitions underscore distinct priorities: Anthropic builds industry-specific moats, while OpenAI amplifies its public story.

marsbit · 04/03 10:07

Cursor vs. Anthropic and OpenAI: Thanks for Raising Me, Now I'm Here to Take the Market

Cursor, a VS Code fork initially built on OpenAI's API, has transitioned from dependent customer to formidable competitor by launching its proprietary coding model, Composer 2. This model reportedly outperforms Claude Opus 4.6 on key benchmarks at one-tenth the cost. The case exemplifies a critical strategic dilemma in tech: when to open or close an API.

The authors propose a framework: opening an API risks eroding a company's moat if competitors can use it to bootstrap their own products and aggregate demand, eventually enabling vertical integration. This is especially risky in AI, where API outputs can directly improve a rival's model training and product refinement, which is exactly what Cursor achieved by leveraging OpenAI and Anthropic models to gather user data and refine its own offering. Companies then face two choices: restrict API access (like Twitter, which closed its API to protect its social graph) or keep it open and find alternative moats, such as network effects or Lindy effects (as with crypto protocols, e.g., Morpho).

The authors predict that leading AI companies like OpenAI and Anthropic will likely restrict access to their most advanced models over time, as switching costs remain low, network effects are weak, and distillation techniques reduce training costs. This could stifle consumer AI innovation but create opportunities for open alternatives.

marsbit · 03/31 07:35
