Within 4 days, Amazon announced an additional $25 billion investment, and Google announced an investment of up to $40 billion—two direct competitors betting over $65 billion on the same AI startup. Rather than examining Anthropic's development from a VC perspective, it's better to see it as the start of the latest round in the cloud wars.
Google and Amazon are direct competitors in the global cloud market. The fact that they are simultaneously betting on the same model company is itself an anomalous signal—they would rather contain each other than let the other exclusively control this strategic asset. There may be three hidden signals here:
- The giants are investing in Anthropic not for equity, but for pre-sold compute orders. Every cent of the $65 billion from Amazon and Google comes with spend-back clauses—the money Anthropic receives must flow back, at a scale of hundreds of billions of dollars, into the investors' cloud services and chips. The essence of the transaction: compute suppliers securing big customers for their own capacity.
- The competitive logic of the cloud market has been completely rewritten. Enterprises used to choose cloud based on price and stability; now they choose "whose cloud runs the best models." Models have hijacked compute power. Whoever loses the anchor at the model layer loses enterprise customers. OpenAI is welded to Microsoft, making Anthropic the only contested target for Google and Amazon.
- AI infrastructure in China and the US is showing different evolutionary tendencies—but this isn't a simple "closed vs. open" binary opposition. Both markets simultaneously have closed-loop and open lines; the difference lies in the proportion of dominant forces and the depth of binding. DeepSeek's open-source route offers the Chinese market an alternative option different from the US's tri-polar closed loop, but the sustainability of this option remains to be verified.
The Word "Investment" Obscures the Real Transaction Structure
$65 billion, flowing to the same company from two rivals, fighting over an anchor point neither can afford to lose.
Translated into business terms, Amazon's deal is: $5 billion upfront, with the subsequent $20 billion paid out upon reaching "specific commercial milestones." In return, Anthropic commits to spending over $100 billion on AWS technology products over the next decade, covering Amazon's self-developed AI chips Trainium and their next-generation successors.
Google's deal is: $10 billion upfront, with an additional $30 billion if Anthropic meets performance targets, while Google Cloud provides approximately 5 gigawatts of compute power over the next five years. What is 5 gigawatts? Roughly the combined output of several large power stations—on the order of five typical nuclear reactors.
The money flows both ways.
Anthropic gets the funding, but while getting the money, it must also spend it back—on the investors' cloud services, on their chips, in their built compute clusters. This isn't the VC model of "giving you money to burn"; it's closer to compute supplier financing: giants use investment to lock in a big customer, essentially securing pre-sold orders for their own compute capacity.
More bluntly: Google and Amazon are betting not on how high Anthropic's valuation will rise, but on how much compute power Claude will continuously consume.
Here, "why Anthropic" is more worth answering than "why so much money."
To understand why Google and Amazon must bet on Anthropic, one must first understand what Microsoft has already captured, and why they have no second choice.
Microsoft began deeply binding itself to OpenAI as early as 2019, with a $1 billion investment plus exclusive Azure compute support, in exchange for priority deployment rights for OpenAI models on Azure. Subsequently, with the explosion of GPT-3 and GPT-4, countless enterprise customers migrated to Azure to use the most advanced models—not for Microsoft's servers, but for OpenAI's models. The choice of compute has been hijacked by models. Moreover, the Microsoft-OpenAI relationship has continued to deepen in recent years, with exclusivity clauses making it almost impossible for other cloud vendors to get in.
Competition in the cloud market has shifted from "whose servers are cheaper" to "whose cloud runs the best models." Whoever loses the anchor at the model layer loses the migration costs of enterprise customers. The OpenAI anchor has long been seen as Microsoft's possession, but in the past year, cracks have appeared in their relationship—OpenAI has been actively pursuing multi-cloud distribution and advancing the independent layout of its own compute infrastructure, no longer treating Azure as its only commercial outlet.
This, counterintuitively, makes Google and Amazon more anxious: they are betting on Anthropic not just because they can't get OpenAI, but because OpenAI is growing up and building its own compute—no one can hold it exclusively forever.
Google has Gemini and Amazon has Nova, but these self-developed models' penetration in the enterprise market lags far behind Claude and GPT. Within this model generation's window, an accelerated catch-up may not be cost-effective; binding with an already proven model company is more practical. Anthropic's Claude is the only target worth binding to in this window.
One number is enough to make the point: on April 7, 2026, Anthropic disclosed that its annualized revenue (ARR) had reached $30 billion, up 233% from $9 billion at the end of 2025. $30 billion in ARR is not a vision on a slide deck; it is revenue stacked up from paid contracts—Claude has become the most irreplaceable non-self-developed model in the enterprise AI market.
And just this month, Anthropic acquired Coefficient Bio, a biotech AI startup founded only eight months earlier, for approximately $400 million, marking its first expansion into the life sciences. Anthropic's story is no longer just that of a model company—it is becoming an AI infrastructure layer spanning multiple vertical industries.
This is why Google and Amazon would rather "feed a competitor" than let it be tied up by someone else.
Why Now—Cloud War "Land Grab" Again
If 2023 was year one of the large-model explosion and 2025 was year one of multimodal and Agent deployment, then the main theme of 2026 has become: AI resources are accelerating their concentration toward leading platforms, and the landscape of the cloud war is being rewritten.
Several noteworthy events have occurred in the past few months:
- September 2025: Anthropic completed a $13 billion Series F funding round, with its valuation soaring to $183 billion;
- Same month: Anthropic announced cooperation with Google and Broadcom to build a 3.5-gigawatt-level TPU cluster;
- February 2026: Anthropic completed a Series G funding round (amount undisclosed), with a valuation of approximately $350 billion;
- March 12, 2026: Anthropic announced the Claude Partner Network, investing $100 million to build an enterprise deployment ecosystem;
- April 6, 2026: Broadcom's SEC filing disclosed that Anthropic signed agreements with Google and Broadcom to obtain approximately 3.5 gigawatts of next-generation TPU compute power starting in 2027, with the agreements extending to 2031.
These series of actions point to the same conclusion: Anthropic has passed the stage of "proving itself" and entered the track of "scaling expansion." Its needs are compute power, talent, and customers; the cloud giants' needs are to lock in this customer and prevent competitors from locking it in. The interests of both sides converge precisely at this moment.
$350 billion—a valuation effectively endorsed by both Amazon and Google within four days—is the market's price tag on this land-grab war.
For Anthropic, these two sums of money are certainly welcome. ARR breaking $30 billion, two major cloud vendors riding escort, bargaining chips for an IPO more abundant than ever—these are the best of times.
But the other side of the coin is equally real.
First constraint: Independence is being eroded. Being equity-held by two direct competitors is extremely rare in business history. Google has Gemini, Amazon is pushing Nova; their relationship with Anthropic is both cooperative and potentially competitive. When interests diverge—for example, if Anthropic's product roadmap directly conflicts with the investors' self-developed models—Anthropic will have to walk a tightrope between its two "landlords."
When Google first invested $300 million in Anthropic in 2023, it acquired about 10% of the shares and has continued to add to its stake. Now, with both giants holding a say in the boardroom, every strategic decision Anthropic makes must find a balance point in the gap between two lines of interest.
Second constraint: The safety narrative is under pressure. Anthropic built its identity on "Constitutional AI," drawing a line against OpenAI with "safety first." But this month, its flagship model Claude Mythos prompted exceptional handling by the White House precisely because its powerful cyber offense and defense capabilities raised complex national-security considerations. Earlier, in February 2026, the Trump administration ordered federal agencies to stop using Claude. The double-edged effect of the safety narrative is emerging—for the first time, Anthropic cannot act purely on its own principles, because its model is too powerful. As commercial pressure mounts, holding the "safety boundary" will only get harder.
Third constraint: The sword of Damocles of an IPO. Behind the $350 billion valuation is investors' need for an exit path. The public market's patience for growth stories is limited. When "Dario's ideal" meets the pressure of quarterly earnings reports, how long can the "public benefit company" narrative hold? That is a question left for 2027. Market voices estimate that, based on conventional dilution ratios, Anthropic's IPO could raise on the order of tens of billions of dollars—though this estimate has not been officially confirmed.
China's AI Industry: The Same Two Lines, Different Proportions
Returning to the industry level, the profound impact of these two transactions may only be fully assessed years later.
One piece of background needs explaining: Anthropic's Claude, like OpenAI's GPT, follows a closed-source commercial route—model weights are not open; enterprises can only call them through Anthropic's official API or partner cloud platforms (AWS Bedrock, Google Cloud Vertex AI). This means that when the world's three largest cloud service providers—Microsoft, Google, Amazon—are all using capital to bind closed-source model companies, they form three sets of exclusive "model-cloud" bindings: OpenAI gets optimal deployment conditions only on Azure; Claude's compute commitments must spend money back on AWS and Google Cloud.
If enterprise customers are simply calling Claude's API for text generation, switching models takes only a few lines of code and the migration cost is low. But if Claude has been deeply embedded into business processes—a complete Agent system built on Claude, all prompt engineering tuned to Claude, the internal knowledge base deeply integrated with Claude Enterprise, AWS's accompanying compliance and audit services in use alongside—then the cost of changing models is not just swapping an API endpoint. Agent behavior becomes unpredictable due to model differences, prompts need retuning, tool-calling chains need rewriting, and compliance systems need readaptation.
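The asymmetry above can be sketched in a few lines. This is an illustrative toy model, not any vendor's actual SDK; every name and number in it is hypothetical.

```python
# Toy sketch of migration-cost asymmetry (all names hypothetical, not a real SDK).

# Thin integration: the model is a single configuration value, so switching
# providers really is "a few lines of code".
PROVIDER_CONFIG = {
    "model_a": {"endpoint": "https://api.vendor-a.example/v1/messages"},
    "model_b": {"endpoint": "https://api.vendor-b.example/v1/chat"},
}

def build_request(provider: str, prompt: str) -> dict:
    """Provider-agnostic request: swapping models changes one dictionary key."""
    return {"url": PROVIDER_CONFIG[provider]["endpoint"], "body": {"prompt": prompt}}

# Deep integration: every layer below is tuned to one model's behavior, and
# each must be re-validated after a switch.
DEEP_LAYERS = [
    "prompt engineering tuned to the model",
    "agent tool-calling chains",
    "knowledge-base retrieval format",
    "cloud-side compliance and audit services",
]

def migration_cost(coupled_layers: int, cost_per_layer: float = 1.0) -> float:
    """Toy model: switching cost scales with the number of coupled layers."""
    return coupled_layers * cost_per_layer

print(migration_cost(0))                 # thin integration: 0.0
print(migration_cost(len(DEEP_LAYERS)))  # deep integration: 4.0
```

The point of the sketch is only the shape of the curve: at zero coupled layers a model swap is a config change; each additional layer tuned to one model adds re-validation work that no API compatibility shim removes.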
As AI's depth of integration on the enterprise side keeps increasing, this migration cost will only rise. The landscape of AI infrastructure is evolving toward a "tri-polar closed loop." Within each pole, models, compute, and cloud services form an internal cycle. For deeply integrated enterprise customers, the space for choice is narrowing.
But this is not the whole picture in the US. Meta's Llama models have always been open source, with the world's largest open-source model ecosystem; Musk open-sourced Grok in 2025; Mistral likewise follows an open-source route. These open-source models can be freely deployed by any cloud vendor or any enterprise. On the US AI map, the "tri-polar closed loop" and "open-source public goods" lines coexist; it is just that the former currently far exceeds the latter in commercial scale and capital investment.
Pulling the focus back to China.
Alibaba and Tencent jointly investing in DeepSeek—a company known for its open-source models—almost in sync with Google and Amazon's joint investment in Anthropic, is easily read as the Chinese chapter of the same story. But the underlying logic is vastly different.
First, look at the nature of the models themselves. Claude is closed-source; its weights are never released; enterprises can only call it through Anthropic's official API or specific cloud platforms. DeepSeek follows an open-source route; its weights are publicly released; any cloud vendor or enterprise can deploy it themselves. This fundamental difference means the two investments have completely different game structures: closed-source models can be monopolized by cloud vendors and used to lock in enterprise customers; open-source models inherently lack exclusivity—any cloud vendor can deploy them, and there is no such thing as "exclusive model rights."
Next, look at the transaction structure. Google and Amazon's $65 billion investment is compute pre-sales: every cent Anthropic receives comes with cloud service procurement commitments—on Amazon's side, it's over $100 billion in AWS spending over the next decade; on Google's side, it's about 5 gigawatts of compute consumption over five years. Money comes in from the left and goes out from the right, forming a closed loop. What the investors want is not equity appreciation, but the investee's continuous consumption of their own compute power.
Whereas Alibaba and Tencent's investment in DeepSeek, based on currently disclosed information, is closer to pure equity investment, without comparable compute-procurement binding clauses. The two structures solve different problems: compute pre-sales digest the cloud vendors' capacity; pure equity investment locks in a strategic seat.
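To make the compute pre-sale structure concrete, the headline figures reported above can be tallied back-of-the-envelope style. The netting below is my own illustrative simplification: it ignores milestones, payout timing, and equity value, and is not a statement about actual contract terms.

```python
# Back-of-the-envelope view of the round-trip flows reported in the article.

amazon_invest = 25e9       # USD flowing in: Amazon's announced investment
amazon_spend_back = 100e9  # USD flowing back: committed AWS spend over a decade

google_invest = 40e9       # USD flowing in: Google's announced investment (up to)
# Google's return leg is denominated in compute (~5 GW over five years),
# so it has no directly comparable dollar figure here.

# On the Amazon leg alone, Anthropic commits to spend back roughly four
# times what it receives:
ratio = amazon_spend_back / amazon_invest
print(ratio)  # 4.0
```

The asymmetry of the ratio is the whole argument: when the spend-back commitment dwarfs the cash in, the "investment" functions primarily as a pre-sold compute order, exactly as the article describes.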
But it must also be acknowledged that China's cloud market has closed-loop forces too. Alibaba has Tongyi Qianwen, Tencent has Hunyuan, Baidu has Wenxin, Huawei has Pangu—the self-developed models of the four major cloud vendors also run in a closed-source API-call mode, also forming "model-cloud" bindings on their respective cloud platforms. Zhipu's core product ChatGLM likewise offers closed-source APIs. Chinese enterprise customers who deeply integrate a cloud vendor's self-developed model will likewise face rising migration costs.
The difference lies in the proportion and the counterbalancing forces.
More worth watching is the ecosystem effect this loosely coupled model may bring. The US is moving toward a "tri-polar closed loop": Microsoft + OpenAI, Google + Anthropic, Amazon + Anthropic—each closed loop an exclusive binding of a closed-source model to specific cloud platforms.
Open-source models, as "public goods" in the AI infrastructure layer, lower the threshold for model usage across the entire industry, allowing more SMEs and application developers to participate in AI innovation. From this perspective, DeepSeek's existence does offer the Chinese market an alternative option different from the pure closed-loop route.
But whether this option persists depends on three conditions: whether DeepSeek's open-source route can hold up under commercial pressure; whether Alibaba and Tencent's investment avoids gradually evolving into exclusive compute binding; and whether Chinese independent model companies can keep investing in compute diversification.
For now, none of these three conditions have a definite answer.
$65 billion in 4 days is not buying Anthropic's future. It's buying a ticket to not be defined as a "bystander" in an era where AI is redefining everything.
And this ticket is getting ever more expensive—so expensive that no giant dares be absent, and no startup dares rely solely on itself to survive. (This article was first published on Titanium Media APP; author | AGI Signal, editor | Qin Conghui)







