The Next Earthquake in AI: Why the Real Danger Isn't the SaaS Killer, But the Computing Power Revolution?

marsbit · Published 2026-02-12 · Updated 2026-02-12

Introduction

The next seismic shift in AI isn't about SaaS disruption but a fundamental revolution in computing power. While many focus on AI applications like Claude Cowork replacing traditional software, the real transformation is happening beneath the surface: a dual revolution in algorithms and hardware that threatens NVIDIA’s dominance.

First, algorithmic efficiency is advancing through architectures like MoE (Mixture of Experts), which activates only a fraction of a model’s parameters during computation. DeepSeek-V2, for example, uses just 9% of its 236 billion parameters to match GPT-4’s performance, decoupling AI capability from compute consumption and slashing training costs by up to 90%. Second, specialized inference hardware from companies like Cerebras and Groq is replacing GPUs for AI deployment. These chips integrate memory directly onto the processor, eliminating latency and drastically reducing inference costs. OpenAI’s $10 billion deal with Cerebras and NVIDIA’s acquisition of Groq signal this shift.

Together, these trends could collapse the total cost of developing and running state-of-the-art AI to 10-15% of current GPU-based approaches. This paradigm shift undermines NVIDIA’s monopoly narrative and its valuation, which relies on the assumption that AI growth depends solely on its hardware. The real black swan event may not be an AI application breakthrough but a quiet technical report confirming the decline of GPU-centric compute.

Written by: Bruce

Lately, the entire tech and investment world has been fixated on the same thing: how AI applications are "killing" traditional SaaS. Since @AnthropicAI's Claude Cowork demonstrated how easily it can help you write emails, create PowerPoint presentations, and analyze Excel spreadsheets, a "software is dead" panic has begun to spread. This is indeed frightening, but if your gaze stops here, you might be missing the real earthquake.

It's as if we're all looking up at the drone dogfight in the sky, but no one notices that the entire continental plate beneath our feet is quietly shifting. The real storm is hidden beneath the surface, in a corner most people can't see: the foundation of computing power that supports the entire AI world is undergoing a "silent revolution."

And this revolution might end the grand party hosted by AI's shovel seller—Nvidia @nvidia—sooner than anyone imagined.

Two Converging Paths of Revolution

This revolution isn't a single event but the convergence of two seemingly independent technological paths. They are like two armies closing in, forming a pincer movement against Nvidia's GPU hegemony.

The first path is the "slimming" revolution in algorithms.

Have you ever wondered if a superbrain really needs to mobilize all its brain cells when thinking? Obviously not. DeepSeek figured this out with their Mixture of Experts (MoE) architecture.

You can think of it as a company with hundreds of experts in different fields. But every time you need to solve a problem, you only call upon the two or three most relevant experts, rather than having everyone brainstorm together. This is the cleverness of MoE: it allows a massive model to activate only a small fraction of "experts" during each computation, drastically saving computing power.
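To make the gating idea concrete, here is a minimal sketch of top-k expert routing in Python. The shapes, the ReLU experts, and the softmax over the selected scores are illustrative assumptions for exposition, not DeepSeek's actual architecture:

```python
import numpy as np

def moe_layer(x, experts, gate_w, k=2):
    """Route one token through only its top-k experts (illustrative gating)."""
    scores = x @ gate_w                      # one relevance score per expert
    top_k = np.argsort(scores)[-k:]          # indices of the k best-scoring experts
    weights = np.exp(scores[top_k] - scores[top_k].max())
    weights /= weights.sum()                 # softmax over the chosen experts only
    out = np.zeros_like(x)
    for w, idx in zip(weights, top_k):       # only k experts run; the rest stay idle
        W, b = experts[idx]
        out += w * np.maximum(x @ W + b, 0.0)  # a tiny ReLU "expert" network
    return out

rng = np.random.default_rng(0)
d, n_experts = 16, 8
experts = [(rng.normal(size=(d, d)) / d**0.5, np.zeros(d)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
y = moe_layer(rng.normal(size=d), experts, gate_w, k=2)  # 2 of 8 experts activated
```

The compute cost scales with k, not with the total number of experts, which is the whole point: capacity grows with the expert pool while per-token work stays nearly flat.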

What's the result? The DeepSeek-V2 model nominally has 236 billion parameters, but it only needs to activate 21 billion of them each time it works—less than 9% of the total. Yet its performance is comparable to GPT-4, which activates 100% of its parameters on every computation. What does this mean? AI capability and its computing power consumption are decoupling!
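A quick back-of-envelope check of those numbers, assuming compute per token scales with the active parameter count:

```python
total_params = 236e9   # DeepSeek-V2's nominal parameter count (article's figure)
active_params = 21e9   # parameters activated per forward pass (article's figure)

print(f"active fraction: {active_params / total_params:.1%}")                       # -> 8.9%
print(f"compute saved vs. a dense model: {1 - active_params / total_params:.0%}")   # -> 91%
```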

In the past, we assumed that the stronger the AI, the more GPUs it would burn. Now, DeepSeek shows us that through clever algorithms, the same results can be achieved at one-tenth the cost. This directly puts a huge question mark on the essential need for Nvidia GPUs.

The second path is the "lane-changing" revolution in hardware.

AI work is divided into two phases: training and inference. Training is like going to school—it requires reading countless books, and GPUs, with their "brute force" parallel computing capabilities, are indeed useful here. But inference is like our daily use of AI, where response speed is more critical.

GPUs have an inherent flaw in inference: their memory (HBM) is external, and data transfer back and forth causes latency. It's like a chef whose ingredients are in a fridge in the next room—every time they cook, they have to run over to get them, and no matter how fast they are, it's still slow. Companies like Cerebras and Groq have taken a different approach, designing dedicated inference chips with memory (SRAM) directly integrated onto the chip, placing the ingredients right at hand and achieving "zero latency" access.
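The chef analogy can be turned into rough arithmetic. Autoregressive decoding is largely memory-bandwidth bound: every generated token has to stream the active weights past the compute units. The sketch below shows the resulting throughput ceiling; the bandwidth figures are illustrative ballpark assumptions, not vendor specifications:

```python
def decode_tokens_per_sec(active_params, bytes_per_param, mem_bw):
    """Rough ceiling on decode speed when memory-bound: each new token
    requires streaming every active weight through the processor once."""
    return mem_bw / (active_params * bytes_per_param)

ACTIVE = 21e9                # active parameters per token (the MoE figure above)
FP16 = 2                     # bytes per parameter at 16-bit precision

# Bandwidth numbers below are illustrative ballparks, not vendor specs.
for name, bw in [("GPU with off-chip HBM", 3e12), ("chip with on-die SRAM", 25e12)]:
    print(f"{name}: ~{decode_tokens_per_sec(ACTIVE, FP16, bw):.0f} tokens/s")
```

Under these assumptions the on-chip-memory design is faster by roughly the ratio of the two bandwidths, which is why moving the "ingredients" next to the "chef" matters so much for inference.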

The market has already voted with real money. OpenAI, while complaining about Nvidia's GPU inference performance, turned around and signed a $10 billion deal with Cerebras to specifically rent their inference services. Nvidia itself panicked and spent $20 billion to acquire Groq, just to avoid falling behind in this new race.

When the Two Paths Converge: A Cost Avalanche

Now, let's put these two things together: running a "slimmed-down" DeepSeek model on a "zero-latency" Cerebras chip.

What happens?

A cost avalanche.

First, the slimmed-down model is small enough to be loaded entirely into the chip's built-in memory at once. Second, without the bottleneck of external memory, AI response speed becomes astonishingly fast. The final result: training costs drop by 90% due to the MoE architecture, and inference costs drop by another order of magnitude due to specialized hardware and sparse computing. In the end, the total cost of owning and operating a world-class AI could be just 10%-15% of the traditional GPU solution.
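Here is one way to reproduce that 10-15% figure. The training/inference market split and the exact savings ranges are assumptions chosen to match the article's numbers, not measured data:

```python
train_saving = 0.90                 # MoE cuts training cost ~90% (article's figure)
infer_share = 0.9                   # article: inference is ~10x the training market

# "Another order of magnitude" on inference, hedged here as an 85-90% saving:
for infer_saving in (0.85, 0.90):
    total = (1 - infer_share) * (1 - train_saving) + infer_share * (1 - infer_saving)
    print(f"total cost ≈ {total:.1%} of the GPU baseline")   # -> 14.5% and 10.0%
```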

This isn't an improvement; it's a paradigm shift.

The Rug Is Quietly Being Pulled Out From Under Nvidia's Throne

Now you should understand why this is more fatal than the "Cowork panic."

Nvidia's multi-trillion-dollar market capitalization today is built on a simple story: AI is the future, and the future of AI depends on my GPUs. But now, the foundation of that story is being shaken.

In the training market, even if Nvidia maintains its monopoly, if customers can do the job with one-tenth the GPUs, the overall size of this market could shrink significantly.

In the inference market, a pie ten times larger than training, Nvidia not only lacks an absolute advantage but is facing a siege from various players like Google and Cerebras. Even its biggest customer, OpenAI, is defecting.

Once Wall Street realizes that Nvidia's "shovel" is no longer the only—or even the best—option, what will happen to the valuation built on the expectation of "permanent monopoly"? I think we all know.

So, the biggest black swan in the next six months might not be which AI application has taken out whom, but a seemingly insignificant piece of tech news: for example, a new paper on the efficiency of MoE algorithms, or a report showing a significant increase in the market share of dedicated inference chips, quietly announcing that the computing power war has entered a new phase.

When the shovel seller's shovel is no longer the only option, his golden age may well be over.

Related Questions

Q: What is the core argument of the article regarding the next major shift in AI?

A: The article argues that the next major disruption in AI is not the threat of AI applications killing traditional SaaS, but rather a 'silent revolution' in the computational power (compute) that underpins the entire AI world. This revolution, driven by algorithmic efficiency and new hardware, could undermine Nvidia's dominance.

Q: How does the MoE (Mixture of Experts) architecture, as exemplified by DeepSeek-V2, challenge the traditional relationship between AI capability and compute consumption?

A: The MoE architecture challenges the traditional relationship by decoupling AI capability from compute consumption. DeepSeek-V2, with 236 billion parameters, only activates 21 billion (less than 9%) for a given task, achieving performance comparable to models that require 100% activation. This means similar performance can be achieved at a fraction of the computational cost.

Q: What is the fundamental hardware limitation of GPUs for AI inference, and how do companies like Cerebras and Groq address it?

A: The fundamental limitation for GPUs in AI inference is the latency caused by external, high-bandwidth memory (HBM), where data must travel back and forth. Companies like Cerebras and Groq address this by designing specialized inference chips with on-chip memory (SRAM), enabling 'zero-latency' access to data and significantly faster processing speeds.

Q: What potential market impact does the convergence of algorithmic 'slimming' and hardware 'lane-changing' revolutions have?

A: The convergence of these two revolutions could cause a 'cost avalanche.' Training costs could drop by 90% due to MoE architectures, and inference costs could drop by an order of magnitude due to specialized hardware. The total cost of owning and operating a world-class AI could be just 10-15% of the cost of traditional GPU-based solutions, fundamentally reshaping the market.

Q: Why does the article suggest that Nvidia's dominant market valuation is at risk?

A: Nvidia's valuation is built on the premise that its GPUs are the essential 'picks and shovels' for the AI future. This premise is being undermined as algorithmic efficiency reduces the total number of GPUs needed for training, and specialized inference chips from competitors like Cerebras and Google capture market share. If the market perceives Nvidia's hardware as no longer the only or best option, its 'permanent monopoly' valuation could collapse.
