Amazon Invests Additional $25 Billion in Anthropic, AI Infrastructure 'Arms Race' Escalates

marsbit · Published 2026-04-21 · Last updated 2026-04-21

Introduction

Amazon has announced an additional investment of up to $25 billion in Anthropic, with $5 billion delivered immediately and the remainder contingent on performance milestones. The move follows Amazon's recent $50 billion investment in OpenAI, underscoring its strategy of backing both leading AI labs. The deal includes a commitment from Anthropic to spend over $100 billion on AWS infrastructure over the next decade, securing up to 5 gigawatts of computing power to meet growing demand and relieve capacity constraints. Anthropic's annualized revenue has surpassed $30 billion, but rapid user growth has strained its infrastructure. The investment will support scaling Claude's capabilities on Amazon's custom Trainium and Graviton chips, and it deepens integration between Anthropic and AWS: Claude will be accessible natively within AWS services, and over 100,000 organizations already use Claude via Amazon Bedrock. The deal is part of a broader AI infrastructure race, with Amazon planning around $200 billion in capital expenditures this year, largely focused on expanding AI compute capacity.

Author: Claude, Deep Tide TechFlow

Deep Tide Guide: Amazon announced on Monday an additional investment of up to $25 billion in Anthropic (with $5 billion immediately available), securing a commitment from the latter for over $100 billion in AWS spending over the next decade.

This is Amazon's second hundred-billion-dollar package for a leading AI lab within two months, following its $50 billion investment in OpenAI.

Anthropic's annualized revenue has exceeded $30 billion, but computing power bottlenecks are hampering the user experience; the core goal of this deal is to resolve the capacity crisis.

Amazon is placing bets on both of the AI field's top labs simultaneously, and the stakes are getting larger.

According to reports from CNBC, Bloomberg, and other media outlets on April 20, Amazon announced an additional investment of up to $25 billion in Anthropic, with $5 billion available immediately and the remaining $20 billion tied to specific business milestones. This investment is executed at Anthropic's $380 billion valuation from its Series G financing in February of this year. Combined with the previous cumulative investment of $8 billion, Amazon's total investment commitment to Anthropic now reaches a cap of $33 billion.

Two months ago, Amazon had just invested $50 billion in OpenAI, Anthropic's main competitor, and reached a cloud services agreement of a similar scale. Amazon CEO Andy Jassy stated in an announcement that Anthropic's commitment to running its large language models on AWS Trainium for up to ten years "reflects the progress we have made in the field of custom chips."

Following the news, Amazon's stock price rose approximately 2.5% in after-hours trading.

$100 Billion Cloud Commitment for 5 Gigawatts of Compute Power, Responding to OpenAI's 'Insufficient Compute' Allegations

The core of this deal is not just equity investment but a deeply binding infrastructure agreement.

Anthropic has committed to investing over $100 billion in AWS technology over the next decade, covering Amazon's custom AI chips Trainium (from Trainium2 to Trainium4 and future generations) and tens of millions of Graviton CPU cores. In exchange, Anthropic will receive up to 5 gigawatts of computing capacity for training and deploying Claude models. According to Anthropic's blog disclosure, the company currently uses over 1 million Trainium2 chips to train and serve Claude and plans to put nearly 1 gigawatt of Trainium2 and Trainium3 capacity into operation by the end of 2026.

This expansion in computing scale directly answers recent public criticism from OpenAI. OpenAI's Chief Revenue Officer, Denise Dresser, claimed in an internal memo last week that Anthropic made a "strategic error by failing to secure sufficient computing power," predicting that OpenAI would have 30 gigawatts of compute by 2030 while Anthropic would have only 7 to 8 gigawatts by the end of 2027. In its announcement that day, Anthropic candidly acknowledged that demand for Claude from enterprises and developers is accelerating, and that consumer usage has seen a "sharp increase," putting "inevitable pressure" on infrastructure and affecting reliability and performance during peak periods.

Anthropic CEO Dario Amodei said in a statement: "Users tell us that Claude is becoming increasingly important to their work, and we need to build infrastructure to keep up with the rapidly growing demand."

Amazon Writes Hundred-Billion-Dollar Checks to Two AI Labs in Two Months

Amazon's investment strategy is now very clear: bet on both top players in the AI race simultaneously.

In February of this year, Amazon announced an investment of up to $50 billion in OpenAI, also accompanied by a $100 billion AWS cloud service commitment. The structure of the deal with Anthropic is almost identical—$25 billion in investment plus a lock on over $100 billion in cloud spending. According to GeekWire, Amazon is executing the "same playbook" for both labs.

The two major AI companies are also racing to prove their strength to investors. According to CNBC, both Anthropic and OpenAI are preparing for potential IPOs that could land as early as this year. OpenAI's latest funding round valued it at over $850 billion, while Anthropic is valued at $380 billion. Anthropic claims its annualized revenue has exceeded $30 billion (approximately $9 billion at the end of 2025), while OpenAI's memo alleged that this figure was inflated by about $8 billion because Anthropic accounted for revenue from cloud partnerships with Amazon and Google on a gross rather than net basis.

Microsoft is also betting on both sides—it had already invested over $13 billion in OpenAI and in November 2025 invested up to $5 billion in Anthropic, which committed to purchasing $30 billion in Azure computing power.

Claude Platform Integrates with AWS, Battle for Over 100,000 Customers

Beyond investment, integration at the product level is also deepening.

According to the announcement, the native Claude platform will be directly embedded into AWS. Users will be able to access the full Claude console through their existing AWS accounts, permission controls, and billing systems, without needing additional registration or new contracts. This goes a step further than the previous offering of Claude services through the Amazon Bedrock marketplace. Amazon disclosed that over 100,000 organizations are currently running Claude models on Amazon Bedrock.

Anthropic also emphasized in its blog that Claude is the only frontier AI model simultaneously available on all three major global cloud platforms (AWS Bedrock, Google Cloud Vertex AI, Microsoft Azure Foundry). This multi-platform strategy allows enterprise customers to flexibly choose their deployment path based on needs and is also one of Anthropic's differentiated advantages in competing with OpenAI.

On the customer side, Lyft, which uses Claude via Amazon Bedrock to power its customer-service AI assistant, cut average resolution time by 87%. Pfizer uses Claude to let scientists run voice searches across drug-development documents, saving approximately 16,000 hours of retrieval time per year.

AI Infrastructure Race: Amazon's Capital Expenditure Expected to Reach $200 Billion This Year

The larger context embedding this deal is the AI infrastructure arms race among cloud computing giants.

Amazon stated in February that it expects capital expenditure to reach approximately $200 billion in 2026, with the vast majority directed toward AI infrastructure. Project Rainier, a hyperscale computing cluster with nearly 500,000 Trainium2 chips that the two companies co-developed, ranks among the world's largest AI computing clusters; Anthropic uses it to train and deploy current and future versions of Claude.

Earlier this month, Anthropic also expanded its cooperation with Google and Broadcom, locking in computing power on the scale of "several gigawatts," expected to come online starting in 2027. Combined with this 5-gigawatt agreement with Amazon, Anthropic is expanding its computing power reserves across multiple lines simultaneously.

Amazon's custom chip business itself is also accelerating. Jassy recently revealed that the business's annualized revenue has exceeded $20 billion, doubling from the $10 billion reported earlier this year, which in his words is "on fire."

Related Questions

Q: What is the total maximum investment commitment Amazon has made to Anthropic after the latest $25 billion addition?

A: Amazon's total investment commitment to Anthropic has reached a maximum of $33 billion, which includes the previous $8 billion and the newly added up to $25 billion.

Q: What is the core infrastructure agreement between Amazon and Anthropic, and what does Anthropic receive in exchange?

A: Anthropic commits to spending over $100 billion on AWS technology over the next decade, including Amazon's custom AI chips and CPU cores. In exchange, Anthropic will receive up to 5 gigawatts of computing capacity for training and deploying Claude models.

Q: How does Amazon's investment strategy in AI labs appear, based on recent deals with Anthropic and OpenAI?

A: Amazon is simultaneously betting on both leading AI labs, having invested up to $50 billion in OpenAI and up to $25 billion in Anthropic, each accompanied by a $100 billion cloud service commitment, following a similar playbook.

Q: What product integration advancement was announced between Anthropic's Claude platform and AWS?

A: The native Claude platform will be directly embedded into AWS, allowing users to access the full Claude console through their existing AWS accounts, permissions, and billing systems without needing separate registration or contracts.

Q: What is Amazon's projected capital expenditure for 2026, and what is the primary focus of this spending?

A: Amazon expects its capital expenditure to reach approximately $200 billion in 2026, with the majority allocated toward AI infrastructure investments.
