Over the past three years, OpenAI has largely defined the public's first impression of large models: 400 million weekly active users, an $852 billion valuation, and ChatGPT, the AI entry point for the masses.
By public perception, it should have been the first company to achieve commercial success.
However, market data has brought an unexpected reversal:
On April 7, the American large-model company Anthropic announced that its annualized revenue run rate (ARR) had reached $30 billion.
This surpasses OpenAI's $25 billion (as of the end of February 2026), making Anthropic the highest-revenue independent large-model company in the world.
Why can a company without a ChatGPT-style consumer entry point, and without OpenAI's platform scale, grow revenue faster than OpenAI?
What Did Anthropic Bet On?
To understand how Anthropic pulled ahead, one must first understand its fundamental divergence from OpenAI.
After ChatGPT went viral in late 2022, the entire industry rushed to replicate chatbots, but Anthropic invested its resources in seemingly boring infrastructure: API stability, context window expansion, and a model named Claude.
Anthropic's positioning from day one was not a better chat tool, but an engine embedded to enhance enterprise productivity.
This restraint in positioning looked conservative, even backward, through 2023 and 2024.
At the time, ChatGPT's daily active user count was dozens of times Anthropic's, and OpenAI's valuation was far ahead.
But when enterprise AI spending truly exploded in 2025, Anthropic's early layout began to pay off.
Its client list included professional information service providers like Thomson Reuters, tech unicorns in Silicon Valley, and a large number of leading institutions in finance, law, and healthcare. These clients are paying for productivity improvements.
Financial data shows that about 80% of Anthropic's revenue comes from enterprise clients, and many of these clients are billed based on usage. Once an enterprise scenario starts to be used frequently, revenue is quickly driven up.
In contrast, OpenAI's revenue structure is more diverse, currently still dominated by ChatGPT subscription revenue, with API and licensing business accounting for about 15%-20%.
While OpenAI tilted resources towards consumer features like ChatGPT's voice conversation and image generation, Anthropic continued to focus on enterprise capabilities.
Claude Code, launched by Anthropic in 2025, was hailed by the developer community as an indispensable coding tool, becoming a productivity pillar for enterprise engineering teams.
Beyond that, the difference in their pricing strategies is equally telling.
OpenAI's ChatGPT Plus caps per-user revenue at $20 a month, whether the user opens it once or a thousand times.
Anthropic adopts tiered pricing ranging from $20 to $200, with usage billed by token consumption. The deeper the usage, the higher the revenue.
Behind this is enterprise clients' recognition that productivity tools are worth far more than what individual consumers will pay for chat entertainment.
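The revenue gap between the two pricing models is easy to see with a little arithmetic. The sketch below compares a flat subscription against usage-based per-token billing; all fees, token volumes, and per-token rates here are made-up illustrative assumptions, not Anthropic's or OpenAI's actual prices.

```python
# Hypothetical comparison: flat subscription vs. usage-based (per-token) billing.
# All numbers below are illustrative assumptions, not real vendor pricing.

def flat_subscription_revenue(users: int, monthly_fee: float) -> float:
    """Revenue is capped per user, regardless of how heavily each user works."""
    return users * monthly_fee

def usage_based_revenue(monthly_tokens: int, price_per_million_tokens: float) -> float:
    """Revenue scales linearly with token consumption."""
    return monthly_tokens / 1_000_000 * price_per_million_tokens

# One seat on a $20 flat plan yields $20/month no matter the workload.
flat = flat_subscription_revenue(users=1, monthly_fee=20.0)

# The same seat consuming 500M tokens/month at an assumed $15 per 1M tokens:
usage = usage_based_revenue(monthly_tokens=500_000_000, price_per_million_tokens=15.0)

print(flat)   # 20.0
print(usage)  # 7500.0
```

Under these assumed numbers, a single heavy enterprise user generates hundreds of times more revenue on usage-based billing than on a flat subscription, which is why high-frequency workflows matter so much in the enterprise model.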
An observation from a Silicon Valley venture capitalist hits the mark:
OpenAI is building a Disney facing consumers, while Anthropic is building a toll road to enterprise core systems. The former requires continuous innovation and marketing investment, while the latter, once built, has extremely low maintenance costs and tolls can be raised year after year.
The Gap-Widening Coding
Anthropic surpassed OpenAI with its enterprise strategy, but a question naturally follows:
In its vast enterprise revenue landscape, what exactly supports the $30 billion volume?
The answer points to a seemingly vertical but actually extremely lethal track: Coding (programming assistance).
In Anthropic's enterprise business, Coding is not the only source of income.
Scenarios like document analysis, customer service automation, and legal review are all contributing revenue, but coding is undoubtedly the category with the strongest certainty, the fastest growth, and the best reflection of enterprise willingness to invest.
This certainty was amply borne out by Claude Code's commercial performance.
In April 2025, the annualized revenue of this AI programming assistant for developers was only $17 million;
By November, this number soared to $1 billion, setting the fastest growth record in enterprise software history.
By February 2026, Claude Code's ARR had exceeded $2.5 billion, accounting for over 18% of Anthropic's total revenue.
More critical is the health of its revenue structure: enterprise subscriptions account for more than half of Claude Code's revenue and have quadrupled since the beginning of the year.
This means that Claude Code not only contributes a large amount of revenue but has also become a super category for Anthropic to penetrate the enterprise business.
In contrast, OpenAI's response in the Coding field seemed slow and passive.
In OpenAI's revenue structure, Coding-related revenue was almost zero for a long time.
Data shows that until early 2026, the commercial contribution of OpenAI's specialized code products was still minimal.
By the time OpenAI recognized the strategic value of the coding track, turned to internal development of Codex, and finally shipped it as a macOS application in February 2026, the market landscape was already settled.
In terms of enterprise coding market share, according to Menlo Ventures estimates, OpenAI only holds 21%, far lower than Anthropic's 54%.
Although OpenAI's model capabilities have always been strong, in the B2B market, technological leadership is merely the price of entry.
Finding a strongly confirmed scenario like Coding and packaging the technology into productivity infrastructure that enterprises are willing to pay for continuously is the real moat.
And in China, this enterprise-level AI war is unfolding in another form.
Domestic Manufacturers Seize the Enterprise Track
Anthropic grew its revenue on enterprise workflows while OpenAI was forced to play catch-up, and that signal has certainly not been lost on Chinese vendors.
Whether leading big tech companies or startups, they are contesting this early-mover commercialization battlefield through different entry points.
Take ByteDance, a representative big tech player: it is currently connecting its underlying cloud and models, along with upper-layer applications like office and coding tools, to the same enterprise AI foundation.
According to IDC data, in the first half of 2025, Volcano Engine's MaaS market share reached 49.2%, ranking first.
Doubao's large-model call volume is among the highest globally. As of April 2026, the number of enterprise customers with cumulative token usage exceeding 1 trillion had grown from 100 at the end of last year to 140.
This indicates that ByteDance is turning model calls into an infrastructure business that enterprise customers continuously consume.
Feishu is no longer just a collaboration tool but packages AI directly into enterprise packages.
Feishu official documents from April 2026 show that enterprises can directly purchase multi-level AI solutions like AI Basic, AI Business, AI Business Plus, and AI Enterprise.
And Trae, as ByteDance's coding entry point, launched an enterprise version in December 2025, bringing the developer workflow into this enterprise network.
Among large-model startups, Zhipu's enterprise route is even clearer.
Financial reports show that Zhipu's full-year revenue for 2025 was 724 million yuan, of which localized deployment revenue accounted for 73.7%, a year-on-year increase of over 100%. Meanwhile, the API platform ARR grew 60 times in the past 12 months.
This indicates that Zhipu's revenue does not come solely from model hype but from two most typical types of enterprise payments: one is localized deployment, and the other is cloud API calls.
In the first quarter of 2026, after an 83% API price increase, token call volume still grew 400%. This ability to raise prices without losing users is rare among domestic vendors.
Zhipu is proving that even without a full ecosystem like the big tech companies, it is possible to build early revenue on enterprise deployments and API calls.
In fact, Anthropic's surpassing and the collective turn of domestic manufacturers point to a common conclusion:
In the enterprise AI market, having the strongest model capability is no longer the first priority; high-frequency workflows and quantifiable productivity improvements are the primary factors for enterprises choosing AI products.
In the second half of AI, the real competition has just begun.
This article is from the WeChat public account "World Model Factory", author: World Model Factory