Google and Amazon Simultaneously Invest Heavily in a Competitor: The Most Absurd Business Logic of the AI Era Is Becoming Reality

marsbit · Published 2026-04-26 · Updated 2026-04-26

Introduction

In a span of four days, Amazon announced an additional $25 billion investment, and Google pledged up to $40 billion—both direct competitors pouring over $65 billion into the same AI startup, Anthropic. Rather than a typical venture capital move, this signals the latest escalation in the cloud wars. The core of the deal is not equity but compute pre-orders: Anthropic must spend the majority of these funds on AWS and Google Cloud services and chips, effectively locking in massive future compute consumption. This reflects a shift in cloud market dynamics—enterprises now choose cloud providers based on which hosts the best AI models, not just price or stability. With OpenAI deeply tied to Microsoft, Anthropic’s Claude has become the only viable strategic asset for Google and Amazon to remain competitive. Anthropic’s annualized revenue has surged to $30 billion, and it is expanding into verticals like biotech, positioning itself as a cross-industry AI infrastructure layer. However, this funding comes with constraints: Anthropic’s independence is challenged as it balances two rival investors, its safety-first narrative faces pressure from regulatory scrutiny, and its path to IPO introduces new financial pressures. Globally, this accelerates a "tri-polar" closed-loop structure in AI infrastructure, with Microsoft-OpenAI, Google-Anthropic, and Amazon-Anthropic forming exclusive model-cloud alliances. In contrast, China’s landscape differs—investments like Alibaba and Tencent...

Within 4 days, Amazon announced an additional $25 billion investment, and Google announced an investment of up to $40 billion—two direct competitors betting over $65 billion on the same AI startup. Rather than examining Anthropic's development from a VC perspective, it's better to see it as the start of the latest round in the cloud wars.

Google and Amazon are direct competitors in the global cloud market. The fact that they are simultaneously betting on the same model company is itself an anomalous signal—they would rather contain each other than let the other exclusively control this strategic asset. There may be three hidden signals here:

  1. The giants are investing in Anthropic not for equity, but for pre-sold compute orders. Every cent of the $65 billion given by Amazon and Google comes with rebate clauses—the money Anthropic receives must be spent, on the scale of hundreds of billions of dollars, back on the investors' cloud services and chips. The essence of this transaction is that compute suppliers are securing big customers for their own production capacity.
  2. The competitive logic of the cloud market has been completely rewritten. Enterprises used to choose cloud based on price and stability; now they choose "whose cloud runs the best models." Models have hijacked compute power. Whoever loses the anchor at the model layer loses enterprise customers. OpenAI is welded to Microsoft, making Anthropic the only contested target for Google and Amazon.
  3. AI infrastructure in China and the US is showing different evolutionary tendencies—but this isn't a simple "closed vs. open" binary opposition. Both markets simultaneously have closed-loop and open lines; the difference lies in the proportion of dominant forces and the depth of binding. DeepSeek's open-source route offers the Chinese market an alternative option different from the US's tri-polar closed loop, but the sustainability of this option remains to be verified.

The Word "Investment" Obscures the Real Transaction Structure

$65 billion, flowing to the same company, the same opponent, competing for an anchor point that cannot be lost.

Translated into business language, Amazon's terms are: $5 billion upfront, with the subsequent $20 billion paid out upon reaching "specific commercial milestones." In return, Anthropic commits to spending over $100 billion on AWS technology products over the next decade, covering Amazon's self-developed AI chips Trainium and their next-generation products.

Google's terms are: $10 billion upfront, with an additional $30 billion if Anthropic meets performance targets, while Google Cloud provides approximately 5 gigawatts of compute power over the next five years. What is 5 gigawatts? Equivalent to the output power of a medium-sized power station.
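The headline numbers from the two deals reconcile cleanly; a quick back-of-envelope sketch (all figures in billions of dollars, taken from the terms above):

```python
# Back-of-envelope reconciliation of the two deal structures ($ billions).
amazon_upfront, amazon_milestone = 5, 20      # paid now vs. milestone-contingent
google_upfront, google_performance = 10, 30   # paid now vs. performance-contingent

amazon_total = amazon_upfront + amazon_milestone      # Amazon's additional $25B
google_total = google_upfront + google_performance    # Google's "up to" $40B
combined = amazon_total + google_total                # the combined $65B bet

print(f"Amazon: ${amazon_total}B, Google: ${google_total}B, combined: ${combined}B")
```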

The money flows both ways.

Anthropic gets the funding, but while getting the money, it must also spend it back—on the investors' cloud services, on their chips, in their built compute clusters. This isn't the VC model of "giving you money to burn"; it's closer to compute supplier financing: giants use investment to lock in a big customer, essentially securing pre-sold orders for their own compute capacity.

More bluntly: Google and Amazon are betting not on how high Anthropic's valuation will rise, but on how much compute power Claude will continuously consume.

Here, "why Anthropic" is more worth answering than "why so much money."

To understand why Google and Amazon must bet on Anthropic, one must first understand what Microsoft has already captured, and why they have no second choice.

Microsoft began deeply binding with OpenAI as early as 2019, with a $1 billion investment plus exclusive Azure compute support, in exchange for priority deployment rights for OpenAI models on Azure Cloud. Subsequently, with the explosion of GPT-3 and GPT-4, countless enterprise customers migrated to Azure to use the most advanced models—not for Microsoft's servers, but for OpenAI's models. The choice of compute power has been hijacked by models. Moreover, the relationship between Microsoft and OpenAI has continued to deepen in recent years, with exclusivity clauses making it almost impossible for other cloud vendors to intervene.

Competition in the cloud market has shifted from "whose servers are cheaper" to "whose cloud runs the best models." Whoever loses the anchor at the model layer loses the migration costs of enterprise customers. The OpenAI anchor has long been seen as Microsoft's possession, but in the past year, cracks have appeared in their relationship—OpenAI has been actively pursuing multi-cloud distribution and advancing the independent layout of its own compute infrastructure, no longer treating Azure as its only commercial outlet.

This actually makes Google and Amazon more anxious: they are betting on Anthropic not just because they can't get OpenAI, but because OpenAI is growing up and building its own compute power; no one can monopolize it forever.

Google has Gemini, Amazon has Nova, but the penetration of these self-developed models among enterprises is far lower than that of Claude and GPT. Within the window of this generation of models, the cost-effectiveness of an accelerated catch-up may not be high; binding with an already proven model company is more practical. Anthropic's Claude is the only target worth binding to in this window.

One number is enough to illustrate the point: on April 7, 2026, Anthropic disclosed that its annualized revenue (ARR) had reached $30 billion, a 233% increase from $9 billion at the end of 2025. $30 billion in ARR is not a vision on a slide deck; it is revenue built up from paid contracts—Claude has become the most irreplaceable non-self-developed model in the enterprise AI market.
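The 233% figure is consistent with the two disclosed data points—a minimal check:

```python
# Verify the disclosed ARR growth: $9B (end of 2025) -> $30B (April 2026).
arr_end_2025 = 9.0    # $ billions
arr_apr_2026 = 30.0   # $ billions
growth = (arr_apr_2026 - arr_end_2025) / arr_end_2025
print(f"ARR growth: {growth:.0%}")  # prints "ARR growth: 233%"
```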

And just this month, Anthropic acquired the biotech AI startup Coefficient Bio—founded only eight months earlier—for approximately $400 million, marking its first expansion into the life sciences field. Anthropic's story is no longer just that of a model company—it is becoming an AI infrastructure layer spanning multiple vertical industries.

This is why Google and Amazon would rather "feed a competitor" than let it be tied up by someone else.

Why Now—Cloud War "Land Grab" Again

If 2023 was the inaugural year of large models as a technology, and 2025 the inaugural year of multimodal and Agent deployment, then the main theme of 2026 has become clear: AI resources are accelerating their concentration toward leading platforms, and the landscape of the cloud war is being rewritten.

Several noteworthy events have occurred in the past few months:

  • September 2025: Anthropic completed a $13 billion Series F funding round, with its valuation soaring to $183 billion;
  • Same month: Anthropic announced cooperation with Google and Broadcom to build a 3.5-gigawatt-level TPU cluster;
  • February 2026: Anthropic completed a Series G funding round (amount undisclosed), with a valuation of approximately $350 billion;
  • March 12, 2026: Anthropic announced the Claude Partner Network, investing $100 million to build an enterprise deployment ecosystem;
  • April 6, 2026: Broadcom's SEC filing disclosed that Anthropic signed agreements with Google and Broadcom to obtain approximately 3.5 gigawatts of next-generation TPU compute power starting in 2027, with the agreements extending to 2031.

This series of actions points to the same conclusion: Anthropic has passed the stage of "proving itself" and entered the track of "scaling expansion." Its needs are compute power, talent, and customers; the cloud giants' needs are to lock in this customer and prevent competitors from locking it in. The interests of both sides converge precisely at this moment.

$350 billion—a figure effectively endorsed by both Amazon and Google within four days—is the market's pricing for this land-grab war.

For Anthropic, these two sums of money are certainly beneficial. ARR breaking $30 billion, two major cloud vendors riding escort, and unprecedented leverage heading into an IPO—these are the best of times.

But the other side of the coin is equally real.

First constraint: Independence is being eroded. Being equity-held by two direct competitors is extremely rare in business history. Google has Gemini, Amazon is pushing Nova; their relationship with Anthropic is both cooperative and potentially competitive. When interests diverge—for example, if Anthropic's product roadmap directly conflicts with the investors' self-developed models—Anthropic will have to walk a tightrope between its two "landlords."

When Google first invested $300 million in Anthropic in 2023, it acquired about 10% of the shares and has continued to add to its stake. Now, with both giants having a say in the boardroom, every strategic decision Anthropic makes must find a balance point in the gap between two competing lines of interest.

Second constraint: The safety narrative is under pressure. Anthropic built its identity on "Constitutional AI," drawing a line against OpenAI through "safety first." But this month, its flagship model Claude Mythos prompted an exceptional intervention from the White House precisely because its powerful cyber offensive and defensive capabilities raised complex national-security considerations. Earlier, in February 2026, the Trump administration ordered federal agencies to stop using Claude. The double-edged effect of the safety narrative is emerging—Anthropic is, for the first time, unable to act purely according to its own principles because its model is too powerful. As commercial pressure continues to mount, holding the "safety boundary" will become increasingly difficult.

Third constraint: The sword of Damocles of an IPO. Behind the $350 billion valuation is investors' need for an exit path. The public market's patience for growth stories is limited. When "Dario's ideal" meets the pressure of quarterly earnings reports, how long can the narrative of a "public benefit company" hold? That is a question left for 2027. Market voices estimate that, at conventional dilution ratios, Anthropic's IPO fundraising could reach tens of billions of dollars—though this estimate has not been officially confirmed.

China's AI Industry: The Same Two Lines, Different Proportions

Returning to the industry level, the profound impact of these two transactions may only be fully assessed years later.

One piece of background needs explaining: Anthropic's Claude, like OpenAI's GPT, follows a closed-source commercial route—model weights are not open; enterprises can only call it through Anthropic's official API or partner cloud platforms (AWS Bedrock, Google Cloud Vertex AI). This means that when the world's three largest cloud service providers—Microsoft, Google, Amazon—are all using capital to bind closed-source model companies, they form three sets of exclusive "model-cloud" bindings: OpenAI gets optimal deployment conditions only on Azure, and Claude's compute commitments must be spent back on AWS and Google Cloud.

If enterprise customers are simply calling Claude's API for text generation, switching models requires changing only a few lines of code, and migration cost is low. But if Claude has been deeply embedded into business processes—a complete Agent system built on Claude, all prompt engineering tuned for Claude, the internal knowledge base deeply integrated with Claude Enterprise, and AWS's accompanying compliance and audit services in use alongside it—then the cost of changing models is not just changing an API endpoint. Agent behavior becomes unpredictable due to model differences, prompts need retuning, tool-calling chains need rewriting, and compliance systems need re-adaptation.
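The asymmetry between shallow and deep integration can be sketched in a few lines. Everything below is illustrative—hypothetical names, no real provider SDKs—but it shows why, behind a thin abstraction, the model is a one-line config change, while the deeply integrated pieces (tuned prompts, tool schemas, compliance hooks) do not survive an endpoint swap:

```python
from dataclasses import dataclass

@dataclass
class ModelEndpoint:
    """A hypothetical, provider-agnostic handle on a hosted model."""
    provider: str
    model: str
    base_url: str

# Shallow integration: the model is one swappable piece of config.
PRIMARY = ModelEndpoint("anthropic", "claude", "https://api.anthropic.com")
FALLBACK = ModelEndpoint("other-vendor", "other-model", "https://api.example.com")

def generate(endpoint: ModelEndpoint, prompt: str) -> str:
    # A real implementation would POST to endpoint.base_url; the point
    # here is that the call site does not change when the endpoint does.
    return f"[{endpoint.provider}/{endpoint.model}] {prompt}"

# Deep integration is everything that does NOT fit in this abstraction:
# prompts tuned to one model's quirks, tool-calling schemas, agent
# behavior, and cloud-side compliance/audit services are all
# endpoint-specific, which is exactly where migration cost accumulates.
answer = generate(PRIMARY, "Summarize Q1 revenue.")
```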

As the depth of AI integration on the enterprise side continues to increase, this migration cost will only go up. The landscape of AI infrastructure is evolving toward a "tri-polar closed loop." Within each pole, models, compute power, and cloud services form an internal cycle. For deeply integrated enterprise customers, the space for choice is narrowing.

But this is not the whole picture in the US. Meta's Llama models have always been open source, with the world's largest open-source model ecosystem; Musk open-sourced Grok in 2025; Mistral also follows an open-source route. These open-source models can likewise be freely deployed by any cloud vendor or any enterprise. On the US AI map, the "tri-polar closed loop" and "open-source public goods" lines coexist; it is just that the former currently far exceeds the latter in commercial scale and capital investment.

Pulling the focus back to China.

Alibaba and Tencent jointly investing in DeepSeek—a company known for its open-source models—almost in sync with Google and Amazon's joint investment in Anthropic, is easily read as the Sino-US version of the same story. But the underlying logic is vastly different.

First, look at the nature of the models themselves. Claude is closed-source; its model weights are never open; enterprises can only call it through Anthropic's official API or specific cloud platforms. DeepSeek follows an open-source route; its model weights are publicly released; any cloud vendor or enterprise can deploy it themselves. This fundamental difference means the game structures of the two investments are completely different: closed-source models can be monopolized by cloud vendors and used to lock in enterprise customers; open-source models inherently lack exclusivity—any cloud vendor can deploy them, so there is no such thing as "exclusive model rights."

Next, look at the transaction structure. Google and Amazon's $65 billion investment is compute pre-sales: every cent Anthropic receives comes with cloud service procurement commitments—on Amazon's side, it's over $100 billion in AWS spending over the next decade; on Google's side, it's about 5 gigawatts of compute consumption over five years. Money comes in from the left and goes out from the right, forming a closed loop. What the investors want is not equity appreciation, but the investee's continuous consumption of their own compute power.

Whereas the investment by Alibaba and Tencent in DeepSeek, based on currently disclosed information, is closer to a pure equity investment, without comparable compute-procurement binding clauses. The two structures solve different problems: compute pre-sales solve the cloud vendors' capacity-digestion problem, while pure equity investment solves the problem of locking in a strategic seat.

But it must also be acknowledged that China's cloud market has closed-loop forces of its own. Alibaba has Tongyi Qianwen, Tencent has Hunyuan, Baidu has Wenxin, Huawei has Pangu—the self-developed models of the four major cloud vendors also operate in a closed-source API-call mode, likewise forming "model-cloud" binding relationships on their respective cloud platforms. Zhipu's core product ChatGLM also provides closed-source APIs. Chinese enterprise customers who deeply integrate a cloud vendor's self-developed model will likewise face rising migration costs.

The difference lies in the proportion and the counterbalancing forces.

More worth watching is the ecosystem effect this loosely coupled model may bring. The US is moving toward a "tri-polar closed loop": Microsoft+OpenAI, Google+Anthropic, Amazon+Anthropic—each closed loop an exclusive binding of a closed-source model to a specific cloud platform.

Open-source models, as "public goods" in the AI infrastructure layer, lower the threshold for model usage across the entire industry, allowing more SMEs and application developers to participate in AI innovation. From this perspective, DeepSeek's existence does offer the Chinese market an alternative option different from the pure closed-loop route.

But whether this option can persist depends on three conditions: whether DeepSeek's open-source route can hold up under commercial pressure, whether the investment from Alibaba and Tencent will avoid gradually evolving into exclusive compute binding, and whether Chinese independent model companies can keep investing in the pursuit of compute diversification.

For now, none of these three conditions have a definite answer.

$65 billion in 4 days is not buying Anthropic's future. It's buying a ticket to not be defined as a "bystander" in an era where AI is redefining everything.

And this ticket is becoming increasingly expensive—so expensive that no giant dares to be absent, and no startup dares to rely solely on itself to survive. (This article was first published on Titanium Media APP, author | AGI Signal, editor | Qin Conghui)

Related Questions

Q: Why are Google and Amazon investing billions in Anthropic, a direct competitor in the cloud market?

A: Google and Amazon are investing in Anthropic to secure a strategic asset in the AI model layer, ensuring they don't lose enterprise customers to competitors like Microsoft, which is deeply tied to OpenAI. The investments are essentially pre-orders for compute capacity, as Anthropic is required to spend the funds on AWS and Google Cloud services, locking in a major client for their infrastructure.

Q: What is the nature of the financial agreements between Anthropic and its investors Google and Amazon?

A: The agreements are structured as compute pre-sales. Amazon's investment includes an initial $5 billion with up to $20 billion more based on milestones, and Anthropic must spend over $100 billion on AWS over the next decade. Google's investment starts at $10 billion, potentially reaching $40 billion, with Anthropic committing to use around 5 gigawatts of compute power on Google Cloud over five years.

Q: How does competition in the cloud market change with the rise of AI models like Anthropic's Claude?

A: Cloud competition has shifted from factors like price and stability to which cloud platform hosts the best AI models. Models now dictate compute choices, and losing a key model anchor like Anthropic could lead to a loss of enterprise clients. This has led to a 'model-cloud' binding strategy, where cloud providers secure exclusive or preferred relationships with leading model companies.

Q: What are the potential constraints and challenges Anthropic faces despite massive investments?

A: Anthropic faces constraints on its independence due to competing interests from Google and Amazon, pressure on its safety-first narrative as its models become powerful enough to raise national-security concerns, and the future challenge of managing investor expectations for a potential IPO while maintaining its 'public benefit' company ethos.

Q: How does the AI infrastructure landscape in China compare to the U.S., particularly regarding open-source vs. closed-source models?

A: The U.S. is moving toward a 'tri-polar closed-loop' system, with major cloud providers binding exclusively to closed-source models like OpenAI's and Anthropic's, while also hosting open-source alternatives. In China, companies like DeepSeek offer open-source models, providing a non-exclusive alternative, but closed-source models from major cloud providers like Alibaba and Tencent also exist. The key difference is the proportion and depth of these binding relationships, with open-source models acting as a public good that lowers barriers to broader innovation.
