Written by: Bruce
Lately, the tech and investment communities have been fixated on the same thing: how AI applications are "killing" traditional SaaS. Ever since @AnthropicAI's Claude Cowork showed how easily it can write your emails, build your PowerPoint decks, and analyze your Excel spreadsheets, a "software is dead" panic has been spreading. That panic is understandable, but if your gaze stops there, you may be missing the real earthquake.
It's as if we're all looking up at the drone dogfight in the sky, but no one notices that the entire continental plate beneath our feet is quietly shifting. The real storm is hidden beneath the surface, in a corner most people can't see: the foundation of computing power that supports the entire AI world is undergoing a "silent revolution."
And this revolution might end the grand party hosted by AI's shovel seller—Nvidia @nvidia—sooner than anyone imagined.
Two Converging Paths of Revolution
This revolution isn't a single event but the convergence of two seemingly independent technological paths. They are like two armies closing in, forming a pincer movement against Nvidia's GPU hegemony.
The first path is the slimming revolution in algorithms.
Have you ever wondered whether a superbrain really needs to mobilize every brain cell to think? Obviously not. DeepSeek built its models around exactly this insight, using a Mixture of Experts (MoE) architecture.
You can think of it as a company with hundreds of experts in different fields. But every time you need to solve a problem, you only call upon the two or three most relevant experts, rather than having everyone brainstorm together. This is the cleverness of MoE: it allows a massive model to activate only a small fraction of "experts" during each computation, drastically saving computing power.
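To make the idea concrete, here is a minimal, illustrative sketch of top-k expert routing in Python. It is not DeepSeek's actual code; the sizes, names, and the tiny tanh "experts" are placeholders chosen only to show the mechanism: a router scores all experts, but only the top few ever run for a given token.

```python
# Minimal sketch of top-k Mixture-of-Experts routing (illustrative only;
# not DeepSeek's implementation). Sizes and names are made up.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 64        # hidden size of each token vector
N_EXPERTS = 16      # total experts in the layer
TOP_K = 2           # experts actually activated per token

# Each "expert" here is just a small feed-forward weight matrix.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02 for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (n_tokens, D_MODEL). Only TOP_K of N_EXPERTS run per token."""
    logits = x @ router_w                                 # (n_tokens, N_EXPERTS)
    top_idx = np.argsort(logits, axis=-1)[:, -TOP_K:]     # chosen experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top_idx[t]]
        gates = np.exp(chosen - chosen.max())
        gates /= gates.sum()                              # softmax over the chosen experts only
        for gate, e in zip(gates, top_idx[t]):
            out[t] += gate * np.tanh(x[t] @ experts[e])   # run just that expert
    return out

tokens = rng.standard_normal((4, D_MODEL))
print(moe_layer(tokens).shape)   # (4, 64); only 2 of 16 experts did any work per token
```

The thing to notice is that compute per token scales with TOP_K, not with N_EXPERTS: you can keep adding experts (and capacity) without paying for all of them on every token.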
What's the result? The DeepSeek-V2 model has 236 billion parameters in total, but it only needs to activate about 21 billion of them for each token it processes, less than 9% of the total. Yet its performance is comparable to GPT-4, a model that is not publicly known to run anywhere near that sparsely. What does this mean? AI capability and its computing-power consumption are decoupling!
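The arithmetic behind that "less than 9%" figure is worth writing out, because it is the whole decoupling argument in one line (parameter counts are the article's own numbers):

```python
# Back-of-the-envelope check of the sparsity figure quoted above.
total_params = 236e9    # DeepSeek-V2 total parameters
active_params = 21e9    # parameters activated per token

print(f"active fraction: {active_params / total_params:.1%}")   # ~8.9%, i.e. under 9%
# Per-token compute scales roughly with *active* parameters, which is why
# capability and compute cost can pull apart.
```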
In the past, we assumed that the stronger the AI, the more GPUs it would burn. Now DeepSeek shows that, with clever enough algorithms, the same results can be had at a tenth of the cost. That puts a huge question mark over just how indispensable Nvidia's GPUs really are.
The second path is the "lane-changing" revolution in hardware.
AI work splits into two phases: training and inference. Training is like going to school: it means reading countless books, and GPUs, with their brute-force parallel computing, genuinely shine there. Inference is the everyday use of AI, where what matters most is response speed.
GPUs have an inherent handicap in inference: their memory (HBM) sits off-chip, and shuttling data back and forth for every token adds latency. It's like a chef whose ingredients are in a fridge in the next room: no matter how fast the chef is, every dish means a trip down the hall. Companies like Cerebras and Groq have taken a different route, designing dedicated inference chips with memory (SRAM) integrated directly on the chip, keeping the ingredients within arm's reach and achieving near-"zero latency" access.
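A quick back-of-the-envelope sketch shows why where the weights live matters so much for single-stream decoding. The bandwidth numbers below are illustrative assumptions, not vendor specifications, and the model is deliberately crude (it ignores batching, compute time, and interconnects); it only captures the memory-bound ceiling the chef analogy describes.

```python
# Rough, memory-bound estimate of single-stream decoding speed.
# All numbers are illustrative assumptions, not vendor specifications.

active_params = 21e9          # active parameters per token (the MoE figure from above)
bytes_per_param = 2           # fp16/bf16 weights

hbm_bandwidth = 3e12          # ~3 TB/s, roughly an HBM-class GPU (assumed)
sram_bandwidth = 80e12        # ~80 TB/s, roughly an on-chip-SRAM design (assumed)

weights_read_per_token = active_params * bytes_per_param   # ~42 GB per generated token

for name, bw in [("HBM GPU", hbm_bandwidth), ("on-chip SRAM", sram_bandwidth)]:
    tokens_per_sec = bw / weights_read_per_token
    print(f"{name}: ~{tokens_per_sec:.0f} tokens/s (memory-bound ceiling)")
```

On these assumed numbers the on-chip design is roughly 27x faster per stream, purely because the weights never leave the chip.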
The market has already voted with real money. OpenAI, even as it complains about Nvidia's GPU inference performance, has reportedly signed a $10 billion deal with Cerebras specifically to rent inference capacity. Nvidia itself reportedly panicked and spent $20 billion to acquire Groq, just to avoid falling behind in this new race.
When the Two Paths Converge: A Cost Avalanche
Now, let's put these two things together: running a "slimmed-down" DeepSeek model on a "zero-latency" Cerebras chip.
What happens?
A cost avalanche.
First, because only a small set of experts is active at any moment, the model's working weights can sit in the chip's built-in memory instead of being fetched from outside. Second, without the external-memory bottleneck, AI response times become astonishingly fast. The final result: training costs drop by 90% thanks to the MoE architecture, and inference costs drop by another order of magnitude thanks to specialized hardware and sparse computing. In the end, the total cost of owning and operating a world-class AI could be just 10%-15% of the traditional GPU solution.
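Here is that cost stack written out as arithmetic. The multipliers are the article's own claims and the training/inference spend split is a made-up assumption, so treat this as a sanity check of the framing, not a forecast.

```python
# Stacking the savings the article describes, using its own rough multipliers.
# The multipliers are the article's claims; the spend split is an assumption.

training_share, inference_share = 0.3, 0.7   # assumed split of total AI spend

training_factor = 0.10    # "training costs drop by 90%" via the MoE architecture
inference_factor = 0.10   # "another order of magnitude" via dedicated inference chips

total = training_share * training_factor + inference_share * inference_factor
print(f"total cost vs. the traditional GPU stack: ~{total:.0%}")
# ~10% on these assumptions; allow for integration overhead and you land in
# the 10%-15% range the article cites.
```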
This isn't an improvement; it's a paradigm shift.
The Rug Is Quietly Being Pulled From Under Nvidia's Throne
Now you should understand why this is more fatal than the "Cowork panic."
Nvidia's multi-trillion-dollar market capitalization today is built on a simple story: AI is the future, and the future of AI depends on my GPUs. But now, the foundation of that story is being shaken.
In the training market, even if Nvidia keeps its monopoly, customers who can do the same job with one-tenth the GPUs could shrink the overall market dramatically.
In the inference market, a pie potentially ten times larger than training, Nvidia not only lacks an absolute advantage but faces a siege from players like Google and Cerebras. Even its biggest customer, OpenAI, is defecting.
Once Wall Street realizes that Nvidia's "shovel" is no longer the only—or even the best—option, what will happen to the valuation built on the expectation of "permanent monopoly"? I think we all know.
So the biggest black swan of the next six months may not be which AI application takes out which incumbent, but a seemingly insignificant piece of tech news: a new paper on MoE efficiency, say, or a report showing dedicated inference chips gaining real market share, quietly announcing that the computing-power war has entered a new phase.
When the shovel seller's shovel is no longer the only option, his golden age may well be over.