Listed and Halted, Surge Over 108% in a Single Day, Is Cerebras Really the 'Next Nvidia'?

Odaily Planet Daily | Published 2026-05-15 | Updated 2026-05-15

Summary

Cerebras Systems (CBRS), labeled the "next Nvidia," debuted on the NASDAQ on May 14th, 2026. Its stock surged over 108% from its $185 IPO price, briefly touching $385 before settling around $311. CEO Andrew Feldman claims the company's wafer-scale AI chips are "58 times larger and 15-20 times faster" than competitors' chips, including Nvidia's. The company's core innovation is the Wafer Scale Engine (WSE), a massive, dinner-plate-sized chip designed to avoid the bottlenecks of interconnecting multiple GPUs. Its latest system, the CS-3, offers high-performance computing for AI training and inference. While still a niche player with $510 million in 2025 revenue, Cerebras has secured major contracts, most notably a multi-year computing deal with OpenAI worth over $20 billion. This partnership runs deep: OpenAI is a major customer, a creditor via a $1 billion loan, and holds warrants that could make it a 10-11% shareholder in Cerebras. Despite the hype, the article argues Cerebras is unlikely to dethrone Nvidia soon. Nvidia's CUDA ecosystem, vast scale, manufacturing efficiency, and diversified product line form a formidable moat, while Cerebras faces high costs, production challenges with its giant chips, and competition from AMD, Google, and others. However, strong demand for AI inference and its key partnerships could support its stock price in the short to medium term. In conclusion, Cerebras is positioned as a high-speed specialist in the AI hardware market, not a broad-market replacement for Nvidia.

Original | Odaily Planet Daily (@OdailyChina)

Author | Wenser (@wenser 2010)

Last night, Cerebras (CBRS), dubbed the 'next Nvidia,' officially began trading. After opening at its $185 issue price, the stock quickly surged to $350 and peaked at $385 intraday, a gain of over 108%. It has since retraced to around $311, still up more than 68%. Previously, Cerebras CEO Andrew Feldman told CNBC: "Our chip is the size of a dinner plate and is 20 times faster than Nvidia's chips."

What gives this chipmaker, which raised $5.5 billion, the confidence to make such bold claims about being 'faster than Nvidia's chips'? How did it secure a $20 billion order from OpenAI amidst fierce competition? Will its stock price continue its upward trend in the short term? Odaily Planet Daily will provide its own answers to these questions in this article.

Cerebras' Basis for Challenging Nvidia: Opening a New World of AI with Wafer-Scale Chips

As the gap between AI computing supply and demand grows ever wider, robust market demand has propelled Nvidia to become the world's most valuable listed company.

Recently, Nvidia's stock price hit new highs, with its market capitalization briefly exceeding $5.5 trillion. By that measure, it is an economic entity second only to the GDPs of the US and China, far surpassing major economies like Germany and Japan, truly deserving the title 'richer than many countries.'

However, unlike the decades-old 'veteran champion' Nvidia, Cerebras (CBRS) is a newcomer in chip manufacturing.

In 2016, Andrew Feldman, Gary Lauterbach, Sean Lie, Michael James, JP Fricker, and other semiconductor industry veterans co-founded Cerebras Systems, headquartered in Sunnyvale, California. Unlike Nvidia's focus on building general-purpose GPUs to maximize market demand, Cerebras' core innovation is the Wafer Scale Engine (WSE), currently the world's largest AI chip.

The founding team of Cerebras (2022)

Its core products include:

  • WSE-3: Area approximately 46,225 mm² (equivalent to a dinner plate size), containing 4 trillion transistors, 900,000 AI-optimized cores, delivering 125 petaflops of computing power. Compared to traditional GPUs, it turns an entire wafer into a single giant processor, avoiding bottlenecks from multi-GPU interconnects, with on-chip SRAM as high as 44GB and extremely high memory bandwidth.
  • CS-3 System: AI supercomputer based on WSE-3, supporting both training and inference; Currently, Cerebras not only sells chips but also provides cloud services (Cerebras Inference), dedicated data centers, and on-premises deployment technical support.
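
The size claim can be sanity-checked with quick arithmetic. Here is a minimal sketch, assuming a conventional flagship GPU die of roughly 814 mm²; the exact comparison die Cerebras uses is an assumption here, not stated in the article:

```python
# Rough sanity check of the "~58x larger" size claim.
# Assumption: a flagship GPU die of ~814 mm^2; the exact die
# Cerebras compares against is not specified in the article.
WSE3_AREA_MM2 = 46_225      # WSE-3 area, per the article
GPU_DIE_AREA_MM2 = 814      # assumed comparison GPU die

ratio = WSE3_AREA_MM2 / GPU_DIE_AREA_MM2
print(f"WSE-3 is ~{ratio:.0f}x the area of a single GPU die")
```

Under this assumption the ratio lands near 57x, roughly consistent with the "58 times larger" figure Feldman cites.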

In terms of business model, Cerebras primarily provides ultra-low-latency inference for customers such as OpenAI, Meta, Perplexity, Mistral, GSK, and the Mayo Clinic. In 2025 it generated $510 million in revenue (up 76% year-over-year), is already profitable, and has a large order backlog (including a multi-year, hundreds-of-megawatts computing contract with OpenAI).

Illustration of Cerebras WSE-3 Chip

On May 14th, IPO day, Cerebras CEO Andrew Feldman addressed the company's operations, technological moat, and future market direction on CNBC's 'Squawk Box':

  • First, Feldman stated that the IPO was "the right way to fund our growth," the company is mature, and public markets can support huge growth opportunities. He emphasized this was the result of a decade of effort, expressed great pride, and said the market "understood our story and responded positively."
  • Second, he repeatedly stressed that Cerebras is the only company to have successfully built a 'giant chip' in 70 years, with all other attempts having failed, thus "the technical moat is wide and deep." It was here that he mentioned Cerebras' chips are 58 times larger than competitors like Nvidia and run 15-20 times faster, significantly accelerating AI inference and training.
  • Finally, addressing market concerns about the sustainability of AI spending, Feldman stated the demand is "massive and growing." The company's chips qualitatively change the AI experience (faster response, real-time agents, etc.). He mentioned important collaborations with OpenAI, AWS, etc., and expressed optimism about the overall AI hardware environment.

On a side note, similar to Musk's earlier bet with Anthropic on 'space data centers' (recommended reading: 'Musk and Anthropic Are Going to Space for Power'), Feldman also boldly predicted, "Within 15 years, data centers in space are highly likely to become a reality," showcasing his confidence in the long-term construction and rapid expansion of AI infrastructure.

Thus, as a 'speed demon' in the AI chip field, Cerebras has successfully broken through by focusing on extreme performance for super-large-scale models, emerging as a strong challenger to Nvidia in areas like large model inference and super-large-scale training applications.

In this regard, OpenAI's $20 billion order provides ample confidence for its development, and the collaboration between the two goes far beyond the simple relationship of 'chip manufacturer' and 'chip buyer.'

The Complex Relationship Between Cerebras and OpenAI: Customer, Creditor, and Potential Major Shareholder

The ties between Cerebras and OpenAI go back a long way. Beyond company-level collaboration, OpenAI founders Sam Altman and Greg Brockman, among others, were early angel investors in Cerebras and hold small stakes. This is perhaps a key reason for the deep, multi-faceted relationship between the two companies today.

In December 2025, OpenAI provided Cerebras with a $1 billion working capital loan, establishing a creditor-debtor relationship between the two.

In January of this year, Cerebras and OpenAI officially announced a 750MW inference computing procurement agreement, with an option, emphasized in subsequent statements, to expand it to 2GW; the deal was confirmed again in April. According to media reports, OpenAI plans to spend over $20 billion in the next three years on servers powered by Cerebras chips and will acquire equity in the company as part of the deal. OpenAI thus became Cerebras' largest customer by a wide margin.

Image Source: @Xingpt

Subsequent S-1 filings and IPO application documents from Cerebras indicate that OpenAI is expected to obtain approximately 33.44 million Cerebras warrants at an extremely low exercise price of $0.00001 per share. Some warrants have vesting conditions, including compute power delivery dates and milestone requirements such as Cerebras' market cap exceeding $40 billion.

If all warrants are exercised and their conditions met, OpenAI could acquire about 10%-11% of the equity (the exact percentage depends on the post-IPO share count). At the IPO valuation of around $56 billion, this stake would be worth approximately $5-6 billion; at the current market cap (close to $95 billion after the first day of trading), it is worth over $10 billion. Although the warrants have not yet been fully exercised, OpenAI's status as a 'potential major shareholder of Cerebras' is beyond doubt.

Image Source: @Xingpt
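
The warrant math can be sketched directly from the figures above. This is a back-of-the-envelope calculation, assuming the midpoint of the 10%-11% stake range; the exact percentage depends on the post-IPO share count:

```python
# Illustrative value of OpenAI's potential stake in Cerebras.
# Assumption: a 10.5% stake (midpoint of the article's 10%-11% range).
warrants = 33_440_000        # warrants at a $0.00001 exercise price
stake = 0.105                # assumed stake if fully exercised

ipo_valuation = 56e9         # ~$56B IPO valuation
day1_market_cap = 95e9       # ~$95B after the first trading day

value_at_ipo = stake * ipo_valuation
value_day1 = stake * day1_market_cap
print(f"~${value_at_ipo / 1e9:.1f}B at IPO, ~${value_day1 / 1e9:.1f}B at day-1 cap")
```

These come out around $5.9 billion and $10 billion respectively, in line with the article's estimates.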

Whether Cerebras Can Become the Next Nvidia Remains Unknown, but Its Stock Price May Continue to Rise Short-Term

Returning to the third question posed at the outset: can Cerebras become the next Nvidia?

From an industry landscape perspective, the answer is undoubtedly no. There are four main reasons:

  • First, a huge ecosystem gap: Nvidia's CUDA software stack is the de facto industry standard, with countless developers, frameworks, and toolchains built on it. Cerebras has its own software stack, but it is far from matching CUDA's maturity and compatibility, so the switching cost for many developers and enterprises is extremely high.
  • Second, differences in scale and diversification: Nvidia's 2025 revenue exceeded a hundred billion dollars, with its GPUs covering training, inference, graphics, automotive, and data center scenarios. Jensen Huang even proclaimed at CES 2026 that "the AI chip and infrastructure market size could reach $1 trillion by 2027," with Nvidia as the largest beneficiary. In contrast, Cerebras' 2025 revenue was only $510 million, and its customer base is concentrated among a few giants like OpenAI, making it less resilient to risk.
  • Third, differences in manufacturing and cost control: the super-sized AI chip brings not only faster speeds but also greater manufacturing difficulty and cost. Each of Cerebras' wafer-scale chips consumes an entire wafer, and production at TSMC suffers from low output, yield challenges, and high unit cost (a single CS-3 system costs far more than a single GPU). Nvidia, by contrast, can cut dozens of GPUs from one wafer, giving it stronger economies of scale and better unit economics.
  • Fourth, different competitive pressures: unlike Nvidia, which competes from a position of strength, Cerebras faces direct competition from multiple players such as Groq, AMD, Google TPU, and AWS Trainium. Although its momentum is strong, constraints on time, funding, and resources make its current positioning more "high-end niche player" than "market dominator."
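
The yield argument in the third point can be illustrated with the classic Poisson die-yield model. This is a textbook approximation with an invented defect density, not TSMC data:

```python
import math

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson model: probability that a die contains zero defects."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

D0 = 0.1                 # assumed defect density (defects/cm^2), illustrative only
gpu_die_cm2 = 8.14       # assumed ~814 mm^2 conventional GPU die
wafer_cm2 = 462.25       # the WSE-3 uses nearly the entire wafer

print(f"Conventional GPU die yield: {poisson_yield(gpu_die_cm2, D0):.1%}")
print(f"Naive full-wafer yield:     {poisson_yield(wafer_cm2, D0):.1e}")
```

Under these assumptions a conventional die yields around 44%, while a defect-free full wafer is essentially impossible. This is why a wafer-scale design must tolerate defects, for example by disabling bad cores and routing around them, rather than discarding the wafer.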

Based on the above, Cerebras cannot grow into an industry giant like Nvidia in the short term, nor can it disrupt the existing competitive landscape. That said, its per-share price has already surpassed Nvidia's, and with the AI boom still running, the computing power gap widening, and OpenAI and Anthropic yet to go public this year, Cerebras' stock price and market cap may still have room to rise.

In the next 2-3 years, if Cerebras converts its orders from OpenAI, AWS, and others into actual revenue on schedule, its stock price may climb further; if order fulfillment falls short of market expectations or demand for AI model inference shifts, the stock will face significant downward pressure.

In summary, within 1-3 years, Cerebras cannot replace Nvidia, but it can occupy a certain share in the AI infrastructure niche market, becoming the 'king of AI chip speed.' As for the longer-term industry competition landscape, more time is needed for verification.

Recommended Reading

A Decade's Bet on Cerebras: How 'Wafer-Scale AI Chips' Reached Nasdaq

Cerebras AI Chip Breaks Nvidia Monopoly and Stands Out: In-Depth 10,000-Word Analysis of Cerebras Technology Design

Related Questions

Q: What is the core innovation and main product that Cerebras Systems is built upon?

A: Cerebras Systems' core innovation is the Wafer Scale Engine (WSE), the world's largest AI chip. Its flagship product based on it is the WSE-3, a single massive processor the size of a dinner plate containing 4 trillion transistors and 900,000 AI-optimized cores, delivering 125 petaflops of computing power. Cerebras also offers the CS-3 system, an AI supercomputer built around the WSE-3.

Q: According to the article, what is the multifaceted relationship between Cerebras and OpenAI beyond a simple supplier-customer arrangement?

A: The relationship between Cerebras and OpenAI is complex and multifaceted. Beyond being a major customer with a multi-billion-dollar chip supply agreement, OpenAI's founders were early angel investors in Cerebras. OpenAI also provided Cerebras with a $1 billion working capital loan, establishing a creditor-debtor relationship. Furthermore, as part of the supply deal, OpenAI holds warrants that could give it a significant stake (potentially 10-11%) in Cerebras, making it a potential major shareholder.

Q: Why does the article argue that Cerebras cannot become the next Nvidia in the short term?

A: The article gives several reasons. First, there is a massive ecosystem gap: Nvidia's CUDA software stack is the industry standard with unparalleled developer adoption, while Cerebras' software is less mature. Second, there is a huge difference in scale and diversification: Nvidia's revenue exceeds a hundred billion dollars across a wide range of applications, whereas Cerebras' revenue is far smaller and concentrated among a few large customers like OpenAI.

Q: What were the key performance highlights of Cerebras' (CBRS) IPO on its first day of trading?

A: On its first day of trading, Cerebras (CBRS) opened at its $185 issue price. The stock quickly surged, peaking at $385, an increase of over 108%, before settling around $311 per share, still a gain of over 68%.

Q: What did Cerebras CEO Andrew Feldman claim about the performance of their chip compared to competitors during his CNBC interview on IPO day?

A: During his CNBC interview on IPO day, Feldman claimed that the company's chip is 58 times larger than competitors' chips (such as Nvidia's) and runs 15 to 20 times faster, significantly accelerating AI inference and training.
