Understanding the Profit Pools and Industry Landscape of the AI Storage Hierarchy

marsbit · Published 2026-05-14 · Updated 2026-05-14


Author: Godot

AI storage can be broken down into six layers:

1) On-chip SRAM

2) HBM

3) Motherboard DRAM

4) CXL Pooling Layer

5) Enterprise SSD

6) NAS and Cloud Object Storage

This hierarchy is ordered by physical location: the further down the stack, the farther from the compute unit and the larger the storage capacity.
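The six layers above can be sketched as a simple table. The capacity and latency figures below are rough order-of-magnitude illustrations (my assumptions, not figures from the article), included only to show how each step down the hierarchy trades latency for capacity.

```python
# Rough sketch of the six-layer AI storage hierarchy described above.
# Capacity and latency values are order-of-magnitude illustrations
# (assumptions, not figures from the article).
LAYERS = [
    # (layer, typical capacity, typical access latency)
    ("L0 on-chip SRAM",       "10s of MB",  "~1 ns"),
    ("L1 HBM",                "100s of GB", "~100 ns"),
    ("L2 motherboard DRAM",   "TBs",        "~100 ns"),
    ("L3 CXL-pooled memory",  "10s of TB",  "~300 ns"),
    ("L4 enterprise SSD",     "100s of TB", "~100 us"),
    ("L5 NAS / object store", "PBs",        "~10 ms+"),
]

for name, cap, lat in LAYERS:
    print(f"{name:<24} capacity {cap:<12} latency {lat}")
```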

In 2025, the total market across these six layers was about $229 billion (excluding SRAM, which is embedded in compute chips and not sold separately), with DRAM accounting for half, HBM 15%, and SSD 11%.

In terms of profit, each layer is a tight oligopoly, with the top three vendors typically holding over 90% market share.

These profit pools can be divided into three categories:

1) High-margin oligopolistic pools at the silicon layer (HBM, embedded SRAM, QLC SSD)

2) High-margin emerging pools at the interconnect layer (CXL)

3) Scale-compounding pools at the service layer (NAS, Cloud Object Storage)

The three types of pools differ in nature, growth rate, and moats.

Why is Storage Layered?

Because the CPU (which handles control) and the GPU (which handles compute) carry only a small cache on the die itself: on-chip SRAM. That cache is far too small for anything beyond transient working data and cannot hold a large model.

Outside these two chips, larger external memory is needed to store the large models and the context for inference.
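A back-of-the-envelope calculation shows the gap. The 70-billion-parameter model, FP16 precision, and 50 MB of on-chip SRAM below are illustrative assumptions (not figures from the article), but they convey the scale mismatch:

```python
# Why large models cannot live in on-chip SRAM: a rough size check.
# The 70B-parameter model, FP16 precision, and 50 MB SRAM budget are
# illustrative assumptions, not figures from the article.
params = 70e9            # 70B parameters (assumed model size)
bytes_per_param = 2      # FP16
weights_gb = params * bytes_per_param / 1e9

sram_mb = 50             # generous assumption for total on-chip SRAM

print(f"model weights: {weights_gb:.0f} GB")
print(f"on-chip SRAM:  {sram_mb} MB")
print(f"the weights are ~{weights_gb * 1000 / sram_mb:,.0f}x larger than the cache")
```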

Computation itself is fast; the latency and energy cost of moving data between storage layers are the real bottlenecks.

The industry is therefore pursuing three main directions:

1) Stack HBM, placing memory next to the GPU to shorten the data transfer distance.

2) Use CXL to pool memory to the rack level, sharing capacity.

3) Integrate computing and storage on the same wafer, achieving compute-in-memory.

These three directions will shape the profit pool of each layer over the next five years.

The specific layers are detailed below:

L0 On-chip SRAM: A Profit Pool Exclusive to TSMC

SRAM (Static Random-access Memory) is the cache inside CPUs/GPUs, embedded in each chip and not traded separately.

The standalone SRAM chip market is only about $1–1.7 billion. Leaders are Infineon (~15%), Renesas (~13%), and ISSI (~10%)—a small market.

The real profit pool here belongs to TSMC: fitting more SRAM into each generation of AI chips means buying more advanced-process wafers.

Over 70% of the world's advanced process wafers are in TSMC's hands. The SRAM area of every H100, B200, TPU v5, etc., ultimately translates into TSMC's revenue.

L1 HBM: The Largest Profit Pool of the AI Era

HBM (High Bandwidth Memory) is high-bandwidth memory where DRAM (Dynamic Random-access Memory) dies are vertically stacked using TSV (Through-Silicon Via) technology and then attached next to the GPU via CoWoS packaging.

HBM almost single-handedly determines how large a model an AI accelerator can run. SK hynix, Micron, and Samsung have a near 100% market share.

As of Q1 2026, the latest market share breakdown is: SK hynix 57% to 62%, Samsung 22%, Micron 21%. SK hynix has secured significant procurement shares from companies like NVIDIA and is the dominant supplier.

Micron's Q1 FY2026 earnings call mentioned that the HBM TAM (Total Addressable Market) is expected to grow at a CAGR of ~40%, from about $35 billion in 2025 to $100 billion in 2028, reaching the $100 billion mark two years earlier than previous forecasts.
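The growth figures quoted above are internally consistent, which a quick compound-growth check confirms (the helper function is mine, not from any cited source):

```python
# Sanity-checking the HBM TAM growth cited above: $35B (2025) to
# $100B (2028) over three years implies roughly the ~40% CAGR
# that Micron quotes.
def implied_cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

cagr = implied_cagr(35e9, 100e9, 3)
print(f"implied CAGR: {cagr:.1%}")  # ~41.9%
```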

The core advantage of HBM lies in its extremely high profit margins. In Q1 2026, SK hynix's operating profit margin reached a record 72%.

Reasons for the high profitability:

1) TSV stacking consumes wafer capacity that would otherwise produce conventional DRAM, keeping HBM in chronic undersupply.

2) Advanced-packaging yield is hard to ramp; Samsung's earlier market share drop from 40% to 22% was partly caused by yield problems.

3) Major suppliers have been relatively cautious in capacity expansion, and achieved a DRAM ASP (Average Selling Price) increase of over 60% QoQ in Q1 2026, demonstrating a clear seller's market.

Among the three giants, SK hynix, driven by strong HBM demand, achieved annual operating profit of 47.21 trillion KRW in 2025, surpassing Samsung Electronics for the first time in history. In Q1 2026, with a 72% operating margin, it even exceeded the profitability levels of TSMC (58.1%) and NVIDIA (65%).

Micron has high growth expectations, with Bank of America raising its target price to $950 in May 2026. Samsung, with the continuous progress of HBM4 mass production, has the largest room for market share recovery.

L2 Motherboard DRAM

This layer refers to what we commonly call memory modules (DIMMs).

Motherboard DRAM includes conventional memory products such as DDR5, LPDDR, GDDR, and MR-DIMM. It is currently the largest layer of the AI storage stack by revenue: the global DRAM market reached approximately $121.83 billion in 2025.

Samsung, SK hynix, and Micron still dominate the vast majority of the market. According to the latest data from Q4 2025, Samsung ranked first with a 36.6% market share, SK hynix second with 32.9%, and Micron third with 22.9%.

The shift in production capacity towards higher-margin HBM has helped maintain high profitability and pricing power for memory. Although the single-product margin of conventional motherboard DRAM is not as high as HBM's, its overall market size is the largest.

L3 CXL Pooling Layer

CXL (Compute Express Link) allows DRAM to be "pooled" from a single server motherboard to the entire rack level.

With CXL 3.x and beyond, all memory in a rack can eventually be shared among multiple GPUs and allocated on demand. This addresses the inference-time problem of KV caches, vector databases, and RAG indexes that either do not fit in local memory or are too expensive to move.
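To see why KV caches outgrow a single accelerator's memory, consider the standard size formula for transformer inference. The model shape below (80 layers, 8 KV heads, head dimension 128, FP16), resembling a 70B-class model with grouped-query attention, is an illustrative assumption:

```python
# Size of the key/value cache for one inference request, using the
# standard formula. Model shape (80 layers, 8 KV heads, head_dim 128,
# FP16) is an illustrative assumption resembling a 70B-class model
# with grouped-query attention.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # 2x for keys and values, cached at every layer
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

one_request = kv_cache_bytes(80, 8, 128, seq_len=32_768)
print(f"KV cache, one 32k-token request: {one_request / 2**30:.1f} GiB")

# A batch of 64 concurrent requests quickly exceeds any single GPU:
print(f"batch of 64: {64 * one_request / 2**30:.0f} GiB")
```

A batch like this dwarfs the HBM on one accelerator, which is exactly the capacity pressure that rack-level pooling aims to relieve.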

The CXL memory module market was only $1.6 billion in 2024, projected to reach $23.7 billion by 2033. It appears the oligopoly of Samsung, SK hynix, and Micron will continue.

In this layer, Astera Labs focuses on retimers and intelligent memory controllers bridging CXL and PCIe, holding about 55% of this sub-market. Its latest quarterly revenue was $308 million, up 93% YoY, with a non-GAAP gross margin of 76.4% and net profit up 85% YoY: a highly lucrative niche.

L4 Enterprise SSD: The Biggest Beneficiary of the Inference Era

Enterprise NVMe SSDs are the main battleground for AI training checkpoints, RAG indexes, KV cache offloading, and model weight caching. High-capacity QLC SSDs have completely pushed HDDs out of AI data lakes.

The enterprise SSD market was about $26.1 billion in 2025, with a CAGR of 24%, projected to reach $76 billion by 2030.

The competitive landscape? Again dominated by the same giants.

Market share by revenue in Q4 2025: Samsung 36.9%, SK hynix (including Solidigm) 32.9%, Micron 14.0%, Kioxia 11.7%, SanDisk 4.4%. The top five account for about 90%.

The biggest change in this layer is the explosion of QLC SSDs in AI inference scenarios. SK hynix's subsidiary Solidigm and Kioxia have already produced single-disk products with 122 TB capacity. AI inference KV cache and RAG indexes are spilling over from HBM to SSDs.

From a profit pool perspective, enterprise SSDs don't have the extreme gross margins of HBM but enjoy dual tailwinds of capacity-driven growth and inference expansion.

SK hynix (via Solidigm) and Kioxia are relatively pure plays. Samsung and SK hynix enjoy triple-layer benefits from HBM + DRAM + NAND, making them more comprehensive AI storage platform companies.

L5 NAS and Cloud Object Storage: The Compounding Pool of Data Gravity

NAS and Cloud Object Storage are the outermost layers for AI data lakes, training corpora, backup/archiving, and cross-team collaboration. In 2025, NAS was about $39.6 billion (CAGR 17%), and Cloud Object Storage about $9.1 billion (CAGR 16%).

Major vendors for enterprise file storage are NetApp, Dell, HPE, Huawei; for SMBs, Synology and QNAP. For Cloud Object Storage, using IaaS share estimates, AWS ~31–32%, Azure ~23–24%, Google Cloud ~11–12%, the three combined ~65–70%.

Profits in this layer mainly come from long-term hosting, data egress fees, and ecosystem lock-in.

To summarize:

1) DRAM has the largest market but the lowest gross margins (30–40%); HBM's market is only one-third of DRAM's, but its gross margin is double (60%+); CXL Retimers have the smallest market but the highest gross margin (76%+). The closer the layer is to computing, the scarcer and more lucrative it is.

2) Incremental profit pool growth primarily comes from three areas: HBM (CAGR 28%), Enterprise SSD (CAGR 24%), and CXL Pooling (CAGR 37%).

3) Each layer has different business barriers: HBM relies on technical barriers (TSV, CoWoS, yield ramp); CXL-type relies on IP and certification (single supply chain for Retimers); service-type relies on switching costs.

