GPUs Have No 'Price': Four Indices Clash, the Compute Market Is More Chaotic Than You Think

marsbit · Published 2026-04-08 · Last updated 2026-04-08

Abstract

The GPU compute market lacks a clear price benchmark, with four major indices on Bloomberg Terminal showing significant divergence—differing by over $2 and moving inconsistently. This reveals a fragmented and inefficient market for AI compute power, particularly for H100 and B200 GPUs. While demand surges and supply constraints are real, pricing is highly opaque and varies by contract type, provider, and location. The absence of standardized contracts, reliable benchmarks, and liquid secondary markets leads to hoarding and informal subleasing. Key issues include inconsistent pricing methodologies, unstandardized agreements, quality assurance gaps, and no forward pricing mechanism. Without coordinated improvements in infrastructure and market design, establishing a true price for GPU compute remains impossible.

Author: David Lopez Mateos

Compiled by: Deep Tide TechFlow

Deep Tide Guide: The media likes to summarize the rise and fall of GPU compute prices with a single number, but the reality is: the quotes from four index providers on the Bloomberg terminal diverge by more than $2, with inconsistent directions and rhythms. The author of this article is David Lopez Mateos, founder of the GPU compute trading platform Compute Desk. Using first-hand transaction data, he breaks down the real pricing structure of H100 and B200, revealing a primitive market with no consensus benchmark, no standard contracts, and no forward curve—compute power is being hoarded and sublet like short-term rental apartments.

Media headlines would make you think GPU compute prices are soaring. This narrative is comfortable, perfectly fitting into the macro framework of "supply crunch + bottomless AI demand," and it implies something reassuring: we have a well-functioning market with clear and readable price signals.

But we don't. This narrative is almost entirely built on a single index, and it implies something that shouldn't be implied: the GPU rental market has become efficient enough to be represented by a single number.

The supply crunch is real, but the crunch felt by different people is completely different—depending on who you are, where you are, which contract you're trading, and which compute asset you need. Faced with this opacity, the market's natural reaction is not orderly price discovery but hoarding: locking in GPU time you might not even need yet, because you're not sure if you can get it at any price next month. Where there is hoarding and no transparent benchmark, fragmented secondary markets emerge. At Compute Desk, we have already facilitated tenants subletting their clusters like apartments during major events. This is not a hypothesis; it is happening.

Indices Do Not Converge

In mature commodity markets, indices built on different methodologies tend to converge. Brent crude and WTI have a few dollars of spread due to geographic location and crude quality, but they move in sync directionally (Figure 1). This convergence is a hallmark of an efficient market.

Caption: Comparison of Brent and WTI crude oil price trends, showing high directional alignment

There are now three GPU pricing index providers on the Bloomberg terminal: Silicon Data, Ornn AI, and Compute Desk. SemiAnalysis just released a fourth—a monthly H100 one-year contract price index based on survey data from over 100 market participants. Silicon Data and Ornn publish daily H100 rental indices, Compute Desk aggregates data at the Hopper architecture level, and SemiAnalysis captures negotiated contract prices rather than listed or crawled prices. Different methodologies, different frequencies, different angles of insight into the same market. Overlaying them reveals clear divergence (Figure 2).
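The methodological gap can be made concrete with a toy example. The sketch below is purely illustrative: every number is invented, and none of these formulas is any provider's actual methodology. It applies three common index constructions to the same synthetic day of H100 rentals and lands on three different "prices."

```python
"""Illustrative sketch (not any provider's actual methodology): three
index constructions applied to the same synthetic day of H100 rentals."""
import statistics

# Hypothetical quotes: (listed $/GPU-hr, negotiated $/GPU-hr, GPU-hours traded)
quotes = [
    (3.20, 2.85, 5000),   # large neocloud: deep discount off list
    (2.95, 2.90, 1200),
    (3.50, 3.10, 800),    # premium provider
    (2.60, 2.60, 300),    # spot-style seller, no negotiation
]

# Index A: median of listed prices (crawl-style construction)
index_a = statistics.median(q[0] for q in quotes)

# Index B: volume-weighted average of negotiated prices (transaction-style)
index_b = sum(p * v for _, p, v in quotes) / sum(v for *_, v in quotes)

# Index C: unweighted mean of negotiated prices (survey-style)
index_c = statistics.mean(q[1] for q in quotes)

print(f"listed-median index:   ${index_a:.2f}")
print(f"volume-weighted index: ${index_b:.2f}")
print(f"survey-mean index:     ${index_c:.2f}")
```

The point is not the specific values but that the divergence is structural: crawl-based, transaction-weighted, and survey-based constructions answer different questions, so they need not agree even on identical underlying activity.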

Caption: Overlay comparison of four GPU indices, showing significant divergence in price levels and trends

Where Exactly Is the Price Increase Happening

Using Compute Desk data, we can break down H100 price changes by supplier type and contract structure, and overlay Silicon Data's SDH100RT index (Figure 3). All indicators show prices rising, but the starting points and magnitudes vary greatly depending on the index and contract type.

Caption: H100 price trends by contract type overlaid with the SDH100RT index

Compute Desk's H100 neocloud data tells a more specific story than the aggregate index. On-demand pricing was relatively stable throughout the winter, around $3.00/hour, then surged sharply to $3.50 in March. Spot pricing was noisier and lower, with only a slight upward trend until March. Silicon Data's SDH100RT shows a smoother, steady rise, increasing from $2.00 to $2.64 over the same period. The two indices remain at different price levels and describe different time rhythms: Compute Desk shows a March spike, Silicon Data shows a slow climb.

One-year reserved pricing was largely flat until February, then jumped sharply from $1.90 to $2.64 at the end of March—not a gradual catch-up, but a sudden repricing. This looks more like suppliers collectively adjusting contract rates after the on-demand market tightened, rather than a continuous structural demand driver.

The March story for B200 is even more dramatic (Figure 4). Compute Desk's on-demand index exploded from $5.70 to over $8.00 within weeks. Silicon Data's SDB200RT surged from $4.40 to $6.11 before falling back to $5.47. Both indices recorded this move, but the starting points differed by over $2, and the shapes of the rise and fall were different. B200 has less than five months of data, fewer suppliers, and larger spreads; the two indices are viewing the same event through very different lenses.
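A Brent/WTI-style convergence check can be made quantitative with two simple statistics: the average level gap (basis) and the share of periods in which both series move in the same direction. The sketch below uses invented weekly fixes that only mimic the shapes described above, a spike-and-hold versus a rise-then-fade; it is not Compute Desk or Silicon Data data.

```python
"""Sketch of a convergence check between two index series.
The fixes below are invented for illustration only."""

# Hypothetical weekly B200 on-demand fixes, $/GPU-hr
desk = [5.70, 5.80, 6.40, 7.30, 8.10, 8.05]   # spike and hold
sdb  = [4.40, 4.70, 5.30, 6.11, 5.80, 5.47]   # rise then fade

# Level basis: average gap between the two series
basis = sum(a - b for a, b in zip(desk, sdb)) / len(desk)

# Directional agreement: share of periods where both moved the same way
moves = [(a2 - a1 > 0) == (b2 - b1 > 0)
         for (a1, a2), (b1, b2) in zip(zip(desk, desk[1:]), zip(sdb, sdb[1:]))]
agreement = sum(moves) / len(moves)

print(f"average basis: ${basis:.2f}")
print(f"directional agreement: {agreement:.0%}")
```

On these toy series the basis stays above a dollar the whole time even though most moves agree in direction, which is exactly the "same event, different lenses" pattern: directional alignment without level convergence.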

Caption: B200 on-demand and reserved price trends, overlaid with Compute Desk and Silicon Data data

Infrastructure Issues, Not Just Geographic Differences

Commodity markets have basis differentials. Appalachian natural gas is a textbook case: massive reserves sit atop structurally constrained pipeline capacity, with utilization rates in the Pennsylvania-Ohio corridor often exceeding 100%, and new projects like the Borealis Pipeline not coming online until the late 2020s.

The GPU market has a similar situation: an H100 in Virginia and an H100 in Frankfurt are not the same economic good. But geographic differences alone cannot explain why indices measuring the same market diverge so much. The dislocation in the GPU market is deeper than in Appalachian natural gas. The problem with natural gas is a single missing link: pipeline capacity connecting supply and demand. The infrastructure gap in the compute market exists on both the supply and demand sides. Physical infrastructure—the consistent networks, predictable configurations, and predictable availability needed for reliable compute distribution—is not yet mature and sometimes simply doesn't work. Financial infrastructure—standardized contracts that compress spreads despite physical differences, transparent benchmarks, arbitrage mechanisms—also doesn't exist yet.

The data tells one story. The real, painful experience of trying to procure compute in early 2026 tells another. On-demand capacity for all GPU types is virtually sold out. Finding 64 H100s is difficult: Compute Desk shows 90% of suppliers have zero on-demand cluster availability, and the reserved market isn't much better. In a well-functioning market, this level of scarcity would have pushed prices to a new equilibrium. But it hasn't. This suggests suppliers themselves lack real-time pricing intelligence to adjust. Prices are rising, but too slowly to clear the market. The gap between listed prices and true willingness to pay is being filled by hoarding, subletting, and informal secondary market transactions.

What Needs to Change

The current GPU compute market has seven core problems:

No consensus benchmark. Multiple indices coexist with different methodologies and contradictory conclusions.

Aggregate narratives mask structure. A single "H100 price" number masks huge differences between supplier types and contract terms.

Lack of transaction-level data. In bilateral markets, the deviation between listed prices and actual transaction prices is very large.

No contract standardization. Most GPU rentals are bilaterally negotiated with varying terms. Shorter, more standardized contract terms would improve liquidity and price discovery.

No delivery quality guarantee. Interconnect topology, CPU pairing, network stack, and uptime vary enormously. Buyers need to know the quality of the compute they are purchasing before committing.

Contracts lack liquidity. If demand changes during a reservation, options are limited: either eat the cost or sublet informally. The market needs infrastructure to transfer or resell committed compute, allowing capacity to flow to those who need it most.

No forward curve. Without the ability to price forwards, there is no hedging. This is why lenders discount GPU collateral by 40%-50%, keeping financing costs high.
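The financing consequence of that last point can be sketched with back-of-the-envelope arithmetic. All figures below are hypothetical except the 40%-50% haircut range, which comes from the text; the sketch just shows how a haircut translates into borrowing capacity.

```python
"""Back-of-the-envelope sketch of the collateral haircut effect.
Fleet value is assumed; the haircut is the mid-point of the 40-50%
range cited in the text."""

gpu_fleet_value = 10_000_000   # hypothetical $10M GPU fleet at cost
haircut = 0.45                 # mid-point of the cited 40-50% range

# Without a forward curve, lenders cannot hedge residual-value risk,
# so they lend only against the post-haircut collateral value.
borrowing_capacity = gpu_fleet_value * (1 - haircut)

# The gap the operator must fund some other, pricier way (equity)
equity_gap = gpu_fleet_value - borrowing_capacity

print(f"loanable against fleet: ${borrowing_capacity:,.0f}")
print(f"must be financed with equity: ${equity_gap:,.0f}")
```

With a forward market, a lender could hedge the resale value of the collateral and shrink the haircut; without one, nearly half the fleet's value must be carried as equity, which is what keeps financing costs high.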

Building a properly functioning market for the most important commodity of the century cannot advance on just one front. Measurement, standardization, contract structure, delivery quality, liquidity—these must advance in sync. Until then, no one can truly say what a GPU hour is worth.

Related Questions

Q: Why do the four GPU pricing indices on the Bloomberg terminal show significant discrepancies?

A: The indices differ because they use different methodologies, frequencies, and data sources. Some track daily H100 rental rates, others aggregate data at the Hopper architecture level, and one uses survey-based monthly contract prices. This lack of standardization leads to inconsistent price levels and trends.

Q: What does the divergence between GPU indices indicate about the current state of the compute market?

A: The divergence indicates an inefficient and immature market. Unlike mature commodity markets where indices converge, the GPU market lacks a consensus benchmark, standardized contracts, and transparent pricing, leading to fragmented and unreliable price signals.

Q: How did H100 pricing behave across different contract types according to Compute Desk data?

A: On-demand pricing was stable at around $3.00/hour until March, then spiked to $3.50. Spot pricing was lower and noisier, with a slight uptrend in March. One-year reserved pricing was flat until late March, then jumped sharply from $1.90 to $2.64, indicating a sudden repricing rather than gradual demand growth.

Q: What infrastructure gaps are exacerbating the fragmentation in the GPU compute market?

A: Both physical and financial infrastructure are underdeveloped. Physically, inconsistent network reliability, configuration, and availability hinder uniform delivery. Financially, there are no standardized contracts, transparent benchmarks, or arbitrage mechanisms to compress price differences across regions and providers.

Q: What are the core problems preventing the GPU compute market from functioning efficiently?

A: Key issues include: no consensus benchmark, aggregated narratives masking structural differences, lack of transaction-level data, absence of contract standardization, unguaranteed delivery quality, illiquid contracts with no resale mechanisms, and no forward curve for hedging or financing.
