A Four-Page Internal Letter: What Card Is OpenAI Playing?

marsbit · Published on 2026-04-14 · Last updated on 2026-04-14

Abstract

OpenAI's internal memo, revealed by The Information, outlines a strategic narrative against Anthropic across three key areas: revenue accounting, enterprise competition, and compute capacity. First, OpenAI CRO Denise Dresser challenged Anthropic’s reported $30B annualized revenue, claiming the actual net figure—using OpenAI’s accounting method—is $22B. The discrepancy stems from differing GAAP interpretations: Anthropic books gross revenue (including cloud partner shares), while OpenAI records net revenue after partner deductions. Second, enterprise adoption data from Ramp shows Anthropic rapidly closing the gap with OpenAI, narrowing from an 11% to a 4.6% difference within months. Anthropic already leads in high-value sectors like tech, finance, and professional services. Dresser acknowledged Anthropic’s edge in coding capabilities but warned against being a "single-product company" in a platform war. Third, while current compute capacity is comparable (OpenAI ~1.9 GW vs. Anthropic ~1.4 GW), OpenAI’s long-term plans aim for 30 GW by 2030—four times Anthropic’s projected 7-8 GW by 2027. Anthropic’s growth depends on sustaining enterprise revenue to cover rising cloud costs, estimated to reach $6.4B by 2027. The memo also highlighted OpenAI’s strategic shift: reducing reliance on Microsoft (which “limited customer reach”) and partnering with Amazon, which invests in both OpenAI and Anthropic. This places Amazon’s Bedrock platform as a battleground where both models compet...

According to Anthropic's books, its annualized revenue is $30 billion, but by OpenAI's conversion, the same set of sales figures is only worth $22 billion. Neither number is fabricated. This is the first cut thrown by OpenAI's Chief Revenue Officer, Denise Dresser, in a four-page internal letter exposed by the media on April 13.

The starting point of the matter is an employee memo obtained by The Information. In the letter, Dresser did three things at once: praised the new Amazon collaboration as having "astoundingly high demand," admitted that the Microsoft partnership "has limited our reach to customers," and spent considerable space deconstructing Anthropic's revenue figures. The leak came just one week after Anthropic announced it had crossed the $30 billion annualized revenue milestone.

Superficially an internal company memo, it is in substance a carefully constructed piece of information warfare. The most direct way to understand it is through three dimensions: revenue-recognition basis, the competitive landscape on the enterprise side, and the compute arms-race roadmap, then to place all three within the same cloud-partnership structure.

Where does the $8 billion accounting gap come from?

Anthropic reports $30 billion in annualized revenue; OpenAI says the actual figure is $22 billion. The $8 billion difference stems from the starkly different choices the two companies have made in their revenue-recognition basis.

Anthropic uses gross accounting: when a company purchases usage credits for Claude through AWS, Anthropic records the full amount as top-line revenue, then treats the platform share paid to Amazon as a cost. OpenAI does the opposite: it records only the net amount it actually receives from Microsoft; Microsoft's share never enters the top line.

Both methods comply with U.S. Generally Accepted Accounting Principles (GAAP). Anthropic's logic is that it is the "principal" in customer transactions, with cloud vendors merely serving as distribution pipelines. OpenAI's logic is that it treats Microsoft as an "agent," booking only the portion that actually reaches its own hands. The root of the divergence lies not in who is fabricating numbers, but in who more aggressively asserts its dominant position in the sales chain.

Dresser wrote in the memo that Anthropic "uses an accounting method that makes the revenue figure appear larger," including booking the full gross amount of AWS and Google shares into top-line revenue. The subtext is not hard to read: when Anthropic submits its S-1 prospectus to the SEC, auditors will rule on this basis, and it may then need to restate its disclosures on a unified basis. Converted to the same basis, Anthropic sits at $22 billion and OpenAI at $24 billion, and the lead changes hands.
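The conversion between the two bases can be checked with back-of-envelope arithmetic. The sketch below derives the blended platform share implied by the article's own $30B-gross/$22B-net pair; the share rate itself is inferred for illustration, not a figure either company has disclosed.

```python
# Figures as reported in the article; the blended partner share is
# inferred from them, not disclosed by either company.
GROSS_ARR = 30e9  # Anthropic's annualized revenue on a gross basis
NET_ARR = 22e9    # the same sales restated on OpenAI's net basis

implied_share = 1 - NET_ARR / GROSS_ARR  # blended cloud-partner cut, ~26.7%

def gross_to_net(gross: float, partner_share: float) -> float:
    """Restate gross-basis revenue on a net basis by deducting the
    cloud partner's share before it reaches the top line."""
    return gross * (1 - partner_share)

print(f"implied blended partner share: {implied_share:.1%}")   # ~26.7%
print(f"net ARR at that share: ${gross_to_net(GROSS_ARR, implied_share) / 1e9:.0f}B")
```

In other words, the $8 billion gap is consistent with cloud partners keeping a bit over a quarter of gross billings; the dispute is purely about which side of that deduction the top line sits on.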

It should be said that Anthropic's revenue growth is itself historic. According to data from Bloomberg, Sacra, and other outlets, its annualized revenue grew from about $9 billion at the end of Q4 2025 to the current $30 billion, more than tripling in under five months, and this is primarily driven by real customer procurement, not something an accounting-basis adjustment can explain away. The core of this accounting controversy is not that Anthropic is shrinking, but that OpenAI is using the accounting-basis knife to redraw the boundaries.

The catch-up speed on the enterprise side is faster than most people anticipated

Ramp tracks the actual AI spending behavior of thousands of companies on its platform, making it a first-hand data source for judging real choices on the enterprise side.

Ramp's April AI Index data: Anthropic's share among enterprise paying customers rose to 30.6% against OpenAI's 35.2%, and the gap narrowed from 11 percentage points in February to 4.6. At Anthropic's average monthly gain of 6.3 percentage points over the past two months (itself a record single-month increase for this metric), it would overtake OpenAI on this measure in roughly two months.
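The projection above is a simple linear extrapolation of the gap-closure rate, which can be reproduced in a few lines. Linear extrapolation is a strong assumption; real adoption curves rarely stay linear, so the result is an upper bound on optimism, not a forecast.

```python
# Share-gap figures as cited from the Ramp AI Index.
gap_feb = 11.0   # percentage-point gap in February
gap_apr = 4.6    # percentage-point gap in April
months_elapsed = 2

# How fast has the gap been closing, and how long until parity at that pace?
closure_per_month = (gap_feb - gap_apr) / months_elapsed  # pp per month
months_to_parity = gap_apr / closure_per_month

print(f"gap closing at {closure_per_month:.1f} pp/month")   # 3.2
print(f"parity in ~{months_to_parity:.1f} months at that pace")  # ~1.4
```

At the observed closure rate the crossover lands inside the article's "approximately two months" window.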

More notable are the structural signals. In three high-purchasing-power industries, Anthropic's lead is already a fact: Information Technology/Software (63% vs. 54%), Financial Services (52% vs. 46%), and Professional Services (47% vs. 44%). These happen to be the sectors where enterprise AI budgets are most concentrated and procurement decisions most professional. It means the companies with the most say in the AI purchasing chain have already begun leaning collectively toward Anthropic.

Dresser made a rare concession in the memo that Anthropic "holds a significant lead among enterprise customers," citing its coding capabilities. Coming from inside OpenAI, that statement carries a weight entirely different from outside evaluations: it is one company telling its own employees that the opponent has won on the core battlefield. She immediately added a warning: "You do not want to be a single-product company in a platform war." The reminder to employees is that Claude's advantage in coding, if it cannot extend to the platform layer, is ultimately just a ticket, not a boarding pass.

Compute gap: Similar today, fourfold by 2030

Compute capacity is the hardest competitive dimension for AI companies to shorten in the short term because its construction cycle is measured in years, and its funding threshold is measured in tens of billions.

Current numbers look close: OpenAI at roughly 1.9 gigawatts (GW), Anthropic at roughly 1.4 GW, a difference of about 35%. Dresser described Anthropic in the memo as "operating on a meaningfully smaller curve," but in today's capacity comparison that statement is not especially exaggerated; the gap is real, just not yet decisive.

The real fork comes after 2027. OpenAI plans to reach 30 GW of compute by 2030, backed by a $300 billion five-year cloud computing contract with Oracle, the entire Stargate infrastructure project, and a total construction commitment of $1.4 trillion.

Anthropic's path relies on a 3.5 GW Broadcom custom-chip agreement, deployed through Google Cloud and effective from 2027, plus existing training clusters on AWS, targeting 7-8 GW by the end of 2027.

Even if Anthropic fully delivers on its 2027 target, a fourfold gap remains against OpenAI's 2030 plan. The chasm is not technically insurmountable: if gains in model efficiency let each unit of compute yield more output, Anthropic could build good-enough products with less compute.

But it must do so on the premise that Claude's enterprise momentum continues, using sustained subscription revenue to cover its compute procurement costs: by Sacra's estimates, Anthropic will pay cloud partners about $1.9 billion this year, rising to about $6.4 billion in 2027.
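The capacity and cost figures above reduce to two ratios, sketched below. All inputs are the article's cited numbers (company plans and Sacra estimates); the 7.5 GW value is simply the midpoint of Anthropic's stated 7-8 GW range.

```python
# Planned compute capacity, per the article.
openai_2030_gw = 30.0
anthropic_2027_gw = 7.5  # midpoint of the stated 7-8 GW target

capacity_gap = openai_2030_gw / anthropic_2027_gw
print(f"planned capacity gap: {capacity_gap:.1f}x")  # 4.0x

# Cloud-partner payments, per Sacra's estimates.
cloud_cost_now = 1.9e9    # this year's estimated payments
cloud_cost_2027 = 6.4e9   # 2027 estimate

cost_growth = cloud_cost_2027 / cloud_cost_now
print(f"cloud costs grow {cost_growth:.1f}x by 2027")  # ~3.4x
```

The squeeze is visible in the second ratio: costs more than triple in roughly a year, which is why the memo frames Anthropic's plan as contingent on enterprise revenue holding up.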

Amazon, betting on two competitors simultaneously

The most intriguing sentence in this memo is Dresser's direct characterization of the Microsoft partnership, writing that it "has also limited our ability to reach enterprises where they are."

OpenAI's move toward Amazon is already unmistakable: according to CNBC, Amazon announced a $50 billion investment in OpenAI in February this year, while also obtaining exclusive third-party cloud distribution rights for Frontier, OpenAI's enterprise agent management platform.

This is an active switch from the Microsoft track to the Amazon track. The logic is straightforward: many enterprise customers' AI infrastructure is already built on AWS's Bedrock platform, and Microsoft's exclusivity terms made it difficult for OpenAI to sell there directly.

But the other side of Amazon's role in this competition is equally noteworthy: it is currently Anthropic's largest cloud infrastructure partner and a strategic investor, with cumulative investments of $8 billion. Their joint Project Rainier cluster deploys about 500,000 Trainium 2 chips. Amazon's total bet across the AI race amounts to $58 billion, flowing simultaneously to two opponents currently battling head-on in the enterprise market.

This isn't just a hyperscale cloud vendor's diversified betting; it's a more precise structure: Amazon is both Anthropic's "strategic ally and largest backer" and the new cloud foundation OpenAI is using to "replace Microsoft."

When the two companies compete for the same pool of enterprise customers, the channel they are fighting over happens to be Amazon's Bedrock platform, which distributes both companies' models. Whichever company converts better on Bedrock, Amazon profits, while OpenAI and Anthropic lose ground to each other.

Under pressure from continuously eroding enterprise market share and structural cracks in the Microsoft partnership, OpenAI chose to rebuild the narrative with a carefully calculated numbers war while using Amazon to re-lay its distribution pipeline. Once the three sets of numbers are taken apart, this competition looks more complicated than either side wants you to see.

Related Questions

Q: What is the key difference in revenue recognition between Anthropic and OpenAI as highlighted in the internal memo?

A: Anthropic uses a gross revenue recognition method, booking the full amount a customer pays through AWS as top-line revenue and treating Amazon's platform share as a cost. OpenAI uses a net method, recording only the portion it actually receives from Microsoft, excluding Microsoft's share from its top-line revenue.

Q: According to the Ramp AI Index data mentioned, what is the current trend in enterprise market share between Anthropic and OpenAI?

A: As of April, Anthropic's share among enterprise paying customers rose to 30.6%, while OpenAI's was 35.2%. The gap has narrowed from 11 percentage points in February to just 4.6 points. At Anthropic's recent growth rate of +6.3 percentage points per month, it is projected to overtake OpenAI in this metric within approximately two months.

Q: What significant advantage does the memo concede that Anthropic has over OpenAI in the enterprise market, and what caution does it add?

A: The memo concedes that Anthropic has a "significant lead" among enterprise customers due to its coding capabilities. However, it cautions that "You do not want to be a single-product company in a platform war," implying that Claude's coding advantage must extend to the platform level to be sustainable.

Q: What is the projected compute capacity gap between OpenAI and Anthropic by 2030 according to their respective plans?

A: OpenAI plans to reach 30 gigawatts of compute capacity by 2030 through its Stargate project and a $300 billion cloud deal with Oracle. Anthropic's path, relying on a Broadcom custom chip deal and Google Cloud, aims for 7-8 gigawatts by the end of 2027. Even if Anthropic meets its goal, there would be a fourfold gap compared to OpenAI's 2030 target.

Q: How is Amazon's role described in the competition between OpenAI and Anthropic?

A: Amazon is simultaneously a strategic ally and the largest investor in Anthropic, having invested $8 billion cumulatively, and is also the new cloud foundation for OpenAI, which is seeking to replace Microsoft. Amazon's Bedrock platform distributes models from both companies, meaning Amazon profits regardless of which company wins enterprise customers on its platform, while OpenAI and Anthropic compete directly against each other there.

Related Reads

Google and Amazon Simultaneously Invest Heavily in a Competitor: The Most Absurd Business Logic of the AI Era Is Becoming Reality

In a span of four days, Amazon announced an additional $25 billion investment, and Google pledged up to $40 billion—both direct competitors pouring over $65 billion into the same AI startup, Anthropic. Rather than a typical venture capital move, this signals the latest escalation in the cloud wars. The core of the deal is not equity but compute pre-orders: Anthropic must spend the majority of these funds on AWS and Google Cloud services and chips, effectively locking in massive future compute consumption. This reflects a shift in cloud market dynamics—enterprises now choose cloud providers based on which hosts the best AI models, not just price or stability. With OpenAI deeply tied to Microsoft, Anthropic’s Claude has become the only viable strategic asset for Google and Amazon to remain competitive. Anthropic’s annualized revenue has surged to $30 billion, and it is expanding into verticals like biotech, positioning itself as a cross-industry AI infrastructure layer. However, this funding comes with constraints: Anthropic’s independence is challenged as it balances two rival investors, its safety-first narrative faces pressure from regulatory scrutiny, and its path to IPO introduces new financial pressures. Globally, this accelerates a "tri-polar" closed-loop structure in AI infrastructure, with Microsoft-OpenAI, Google-Anthropic, and Amazon-Anthropic forming exclusive model-cloud alliances. In contrast, China’s landscape differs—investments like Alibaba and Tencent backing open-source model firm DeepSeek reflect a more decoupled approach, though closed-source models from major cloud providers still dominate. The $65 billion bet is ultimately about securing a seat at the table in an AI-defined future—where missing the model layer means losing the cloud war.


Computing Power Constrained, Why Did DeepSeek-V4 Open Source?

DeepSeek-V4 has been released as a preview open-source model, featuring 1 million tokens of context length as a baseline capability—previously a premium feature locked behind enterprise paywalls by major overseas AI firms. The official announcement, however, openly acknowledges computational constraints, particularly limited service throughput for the high-end DeepSeek-V4-Pro version due to restricted high-end computing power. Rather than competing on pure scale, DeepSeek adopts a pragmatic approach that balances algorithmic innovation with hardware realities in China’s AI ecosystem. The V4-Pro model uses a highly sparse architecture with 1.6T total parameters but only activates 49B during inference. It performs strongly in agentic coding, knowledge-intensive tasks, and STEM reasoning, competing closely with top-tier closed models like Gemini Pro 3.1 and Claude Opus 4.6 in certain scenarios. A key strategic product is the Flash edition, with 284B total parameters but only 13B activated—making it cost-effective and accessible for mid- and low-tier hardware, including domestic AI chips from Huawei (Ascend), Cambricon, and Hygon. This design supports broader adoption across developers and SMEs while stimulating China's domestic semiconductor ecosystem. Despite facing talent outflow and intense competition in user traffic—with rivals like Doubao and Qianwen leading in monthly active users—DeepSeek has maintained technical momentum. The release also comes amid reports of a new funding round targeting a valuation exceeding $10 billion, potentially setting a new record in China’s LLM sector. Ultimately, DeepSeek-V4 represents a shift toward open yet realistic infrastructure development in the constrained compute landscape of Chinese AI, emphasizing engineering efficiency and domestic hardware compatibility over pure model scale.

