Author: hoidya | 0xU
1/ What Exactly is the Memory Storage Industry?
The memory storage industry primarily consists of three core products: DRAM, NAND, and HBM. Together, they form the data memory system for all digital devices. Whether it's smartphones, computers, or data centers, they all rely on this layer of infrastructure to handle temporary data processing and long-term storage.
Functionally, DRAM is used for temporary storage of data during operation, meeting the high-speed read/write demands of the computation process. NAND is used for long-term data storage, akin to the device's persistent memory layer. HBM, on the other hand, is a newer form evolved for high-performance computing environments, designed to address the bandwidth bottleneck between processors (such as GPUs) and memory.
From a system architecture perspective, the storage industry is not an independent component separate from computing systems; rather, it is a fundamental dependency layer for all computing systems. Any computing task must first "read data," then "compute," and finally "write back the results." Therefore, storage is one of the foundational constraints in the computing process, not an optional module.
Over the past two decades, demand in this industry has come mainly from three sources: consumer electronics (phones and PCs), enterprise servers, and internet infrastructure. These demand sources share common characteristics: they are highly fragmented, their replacement cycles can be deferred, and each individual demand point is limited in scale. Consequently, the market has long classified storage as a typical cyclical semiconductor industry.
2/ Why Has Storage Long Been Viewed as a Cyclical Industry?
The fundamental reason why the storage industry has long exhibited strong cyclicality lies in the asymmetry of its supply-demand structure. Demand typically correlates with consumer electronics cycles and enterprise IT spending cycles, while supply is driven by wafer fab investments, which have a significant time lag.
When demand rises, prices increase rapidly, prompting manufacturers to expand production. However, due to the typical 12- to 24-month lead time for new capacity construction, new supply often floods the market after the demand peak has passed, leading to a rapid price decline. This mechanism creates a typical boom-bust cycle.
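The lag-driven boom-bust mechanism above can be sketched as a toy simulation. All numbers here are hypothetical placeholders chosen for illustration, not calibrated to real DRAM data:

```python
# Toy boom-bust model: capacity ordered today arrives with a lag,
# so supply responds to stale price signals. All numbers are hypothetical.

LAG = 6            # quarters between a capacity decision and its arrival (~18 months)
QUARTERS = 40

price = 1.0
capacity = 100.0
pipeline = [0.0] * LAG   # capacity under construction, one slot per quarter

history = []
for q in range(QUARTERS):
    # Demand drifts with a mild consumer-electronics cycle.
    demand = 100.0 + 10.0 * ((q // 8) % 2)

    # Price clears the spot market: scarce supply -> higher price.
    price = max(0.1, demand / capacity)

    # Manufacturers order new capacity proportional to today's price,
    # but it only lands LAG quarters later.
    pipeline.append(5.0 * max(0.0, price - 1.0))
    capacity += pipeline.pop(0)
    capacity *= 0.99  # old fabs depreciate

    history.append(price)

# Prices overshoot on the way up, then sag once lagged supply lands.
print(f"peak price: {max(history):.2f}, trough price: {min(history):.2f}")
```

Even with smooth, fully predictable demand, the 6-quarter lag alone is enough to make prices oscillate, which is the point of the cobweb-style dynamic the section describes.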
This cyclical structure was particularly evident from 2010 to 2022. For instance, the DRAM industry experienced cycles of rapid decline from high profit margins to losses, followed by rebounds when new demand recovered. This volatility has long led the market to regard the storage industry as a "high-volatility, low-predictability" cyclical asset class.
During this phase, the industry's pricing mechanism was essentially inventory-driven. Prices rose when inventory fell and dropped when inventory accumulated, with demand itself acting more as a triggering variable rather than a structural one.
3/ What Was the Demand Structure Like Before AI?
Before the advent of Artificial Intelligence, storage demand was primarily driven by consumer electronics and traditional internet infrastructure. Consumer electronics are characterized by long replacement cycles and relatively predictable demand, such as the typical 2-3 year smartphone upgrade cycle. Server and enterprise storage demand relied more on the rhythm of IT capital expenditures, also exhibiting strong cyclicality.
In this structure, storage as a standardized product was priced mainly by supply-demand dynamics, not by long-term, locked-in demand from any single large customer. The market was therefore largely spot-driven, with price signals quickly reflecting inventory changes and capacity adjustments.
In other words, before AI, the demand structure of the storage industry was fragmented and lacked long-term rigid constraints. This was also the core foundation for its cyclical characteristics.
4/ Why Has AI Completely Transformed the Storage Demand Structure? (From Cyclical Commodity to Infrastructure)
Historically, storage demand was driven by consumer electronics (phones, PCs), which is essentially "deferrable consumption." But AI brings a completely different demand function: it is a persistent computing system, and memory usage grows linearly or even super-linearly with model size.
Taking AI data centers as an example, during training and inference raw GPU compute is not the bottleneck; memory bandwidth is. This directly pushes HBM into becoming a rigid demand. Industry data shows that the demand for high-bandwidth memory from AI servers is growing at a rate far exceeding that of traditional DRAM, leading to long-term lock-ups of HBM capacity, with some reports even indicating pre-sales through 2026.
More critically, the supply side is changing: because HBM offers significantly higher profit margins than traditional DRAM, manufacturers are actively reallocating capacity, shifting wafers from DDR4/DDR5 to HBM production. This structural crowding-out effect is causing "non-demand-driven shortages" in traditional DRAM and NAND.
Extreme market signals are already appearing: spot prices for some DRAM and NAND products have risen 15–20% within a quarter, and "intra-day price adjustments" have emerged.
5/ How Was Storage Priced in the Past?
Between 2010 and 2022, the storage industry's pricing followed a standard semiconductor cycle model:
Prices were driven by inventory cycles, not by demand structure.
When inventory decreased → prices rose → manufacturers expanded production → oversupply emerged → prices collapsed.
The core constraints of this mechanism were the "lag in capacity construction (1–2 years) + deferrable nature of demand."
For example, in the previous cycle, the DRAM industry frequently experienced substantial profit volatility on a quarterly basis, even swinging from high margins to losses and back rapidly.
However, this mechanism has been disrupted in the AI era because two variables have changed simultaneously:
- First, demand has shifted from fragmented consumption to centralized procurement.
- Second, supply has shifted from "free-market capacity expansion" to "profit-prioritized allocation (HBM first)."
The result is: cyclical fluctuations still exist, but price elasticity has been structurally compressed.
6/ What Structural Changes Are Happening Now?
The core change in the current (2024–2026) memory market is not just price increases, but a market structure shift from a "spot market" to a "contract allocation system."
First is the crowding-out effect of HBM. Because HBM yields significantly higher profit per wafer than DDR4/DDR5, Samsung, SK hynix, and Micron are all prioritizing capacity allocation towards HBM production. Industry data shows HBM is rapidly rising from a low single-digit share to a structural level of 40%+ of DRAM revenue.
This structural adjustment leads to two outcomes:
- First, contraction in traditional DRAM supply.
- Second, NAND enters a state of passive tightness.
Simultaneously, the market is entering an extreme state of supply-demand imbalance: DRAM industry revenue grew 17.1% year-over-year in Q2 2025, but the source of growth was not a demand explosion; it was jointly driven by price increases and supply constraints.
More extreme signals come from the delivery side: industry lead times have extended from the normal 8–12 weeks to 39–52 weeks, with some automotive-grade memory even exceeding 70 weeks.
This signifies a key structural change: memory is no longer an "immediately tradable commodity" but has become a "rationed resource."
This creates a positive feedback loop:
Price increases → manufacturers reduce spot supply → buyers lock in orders early → further reduces spot liquidity → prices continue to rise.
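The loop above can be sketched as a simple iterative model. The coefficients are hypothetical and only meant to show how thinning spot liquidity amplifies price moves:

```python
# Toy sketch of the feedback loop: higher prices pull supply out of the
# spot market into contracts, which thins spot liquidity and pushes
# prices higher still. All coefficients are hypothetical.

spot_price = 1.0
trace = []

for month in range(12):
    # Manufacturers shift output to contracts when spot prices rise.
    contracted = min(0.9, 0.5 + 0.2 * (spot_price - 1.0))
    spot_share = 1.0 - contracted

    # A thinner spot float amplifies the price impact of the same demand.
    spot_price *= 1.0 + 0.05 / max(spot_share, 0.1)
    trace.append((spot_share, spot_price))

print(f"final spot share: {trace[-1][0]:.0%}, "
      f"final spot price: {trace[-1][1]:.2f}")
```

In this sketch the spot share shrinks every month while the price compounds faster and faster, which is exactly the self-reinforcing dynamic the arrow chain describes.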
7/ Who Benefits in This Structure?
The profit structure within the storage industry is undergoing a clear migration.
Tier One: Supply Side (Samsung / SK hynix / Micron)
These companies are transitioning from "cyclical manufacturers" to "AI infrastructure suppliers." Among them, SK hynix's leading position in HBM is gradually making it a holder of structural pricing power, with its DRAM market share reportedly rising to around 38%.
Tier Two: Demand Side (Microsoft / AWS / Google)
These companies are locking in future supply through long-term contracts, essentially engaging in "time arbitrage": using current capital expenditure to lock in future AI computing power and memory costs.
Tier Three: AI Model Companies (OpenAI, etc.)
They are caught between cash flow pressure and compute demand, forming a closed loop through financing → capex → locking in supply.
The key change: pricing power is shifting from the "market" to "contract structures."
8/ Risks and Falsification Conditions
This round of the "AI memory supercycle" has at least three clear falsification conditions:
First, if AI capex enters a contraction cycle (hyperscalers reduce investment intensity), the current demand structure would quickly unravel, since memory demand is highly dependent on AI compute expansion.
Second, if the HBM technology path is superseded (e.g., by new memory architectures or compute-memory fusion), the current HBM price premium would be compressed, causing capacity to flow back to DRAM/NAND.
Third, if the capacity expansion cycle re-accelerates (Samsung / SK hynix re-enter aggressive expansion), the current supply constraints could reverse into an oversupply cycle within 1–2 years.
In other words, the premise for this structure's validity is:
AI demand growth rate > capacity expansion rate + technology substitution rate
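This validity condition can be expressed as a trivial check. The rates below are hypothetical placeholders, not forecasts:

```python
# The thread's validity condition, expressed as a one-line check.
# All rates passed in are hypothetical placeholders, not forecasts.

def supercycle_holds(ai_demand_growth: float,
                     capacity_expansion: float,
                     tech_substitution: float) -> bool:
    """True while AI demand growth outruns combined supply-side relief."""
    return ai_demand_growth > capacity_expansion + tech_substitution

# Example: 40% demand growth vs 20% capacity growth + 5% substitution.
print(supercycle_holds(0.40, 0.20, 0.05))   # structure holds
print(supercycle_holds(0.15, 0.20, 0.05))   # structure falsified
```

Any of the three falsification conditions above maps onto one of the two supply-side terms (capacity expansion, technology substitution) or onto the demand-side term collapsing.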