GPUs Have No 'Price': Four Indices Clash, the Compute Market Is More Chaotic Than You Think

marsbit · Published 2026-04-08 · Updated 2026-04-08

Introduction

The GPU compute market lacks a clear price benchmark, with four major indices on Bloomberg Terminal showing significant divergence—differing by over $2 and moving inconsistently. This reveals a fragmented and inefficient market for AI compute power, particularly for H100 and B200 GPUs. While demand surges and supply constraints are real, pricing is highly opaque and varies by contract type, provider, and location. The absence of standardized contracts, reliable benchmarks, and liquid secondary markets leads to hoarding and informal subleasing. Key issues include inconsistent pricing methodologies, unstandardized agreements, quality assurance gaps, and no forward pricing mechanism. Without coordinated improvements in infrastructure and market design, establishing a true price for GPU compute remains impossible.

Author: David Lopez Mateos

Compiled by: Deep Tide TechFlow

Deep Tide Guide: The media likes to summarize the rise and fall of GPU compute prices with a single number, but the reality is: the quotes from four index providers on the Bloomberg terminal diverge by more than $2, with inconsistent directions and rhythms. The author of this article is David Lopez Mateos, founder of the GPU compute trading platform Compute Desk. Using first-hand transaction data, he breaks down the real pricing structure of H100 and B200, revealing a primitive market with no consensus benchmark, no standard contracts, and no forward curve—compute power is being hoarded and sublet like short-term rental apartments.

Media headlines would make you think GPU compute prices are soaring. This narrative is comfortable, perfectly fitting into the macro framework of "supply crunch + bottomless AI demand," and it implies something reassuring: we have a well-functioning market with clear and readable price signals.

But we don't. The narrative is built almost entirely on a single index, and it smuggles in an assumption that doesn't hold: that the GPU rental market has become efficient enough to be represented by a single number.

The supply crunch is real, but the crunch felt by different people is completely different—depending on who you are, where you are, what contract you're trading, and what compute asset. Faced with this opacity, the market's natural reaction is not orderly price discovery but hoarding: locking in GPU time you might not even need yet, because you're not sure if you can get it at any price next month. Where there is hoarding and no transparent benchmark, fragmented secondary markets emerge. At Compute Desk, we have already facilitated tenants subletting their clusters like apartments during major events. This is not a hypothesis; it is happening.

Indices Do Not Converge

In mature commodity markets, indices built on different methodologies tend to converge. Brent crude and WTI have a few dollars of spread due to geographic location and crude quality, but they move in sync directionally (Figure 1). This convergence is a hallmark of an efficient market.

Caption: Comparison of Brent and WTI crude oil price trends, showing high directional alignment

There are now three GPU pricing index providers on the Bloomberg terminal: Silicon Data, Ornn AI, and Compute Desk. SemiAnalysis just released a fourth—a monthly H100 one-year contract price index based on survey data from over 100 market participants. Silicon Data and Ornn publish daily H100 rental indices, Compute Desk aggregates data at the Hopper architecture level, and SemiAnalysis captures negotiated contract prices rather than listed or crawled prices. Different methodologies, different frequencies, different angles of insight into the same market. Overlaying them reveals clear divergence (Figure 2).
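To make the methodology gap concrete, here is a minimal sketch of how two indices built from the same market on the same day can land far apart. The quote and transaction numbers are invented for illustration; only the two methodology styles (a crawl of listed prices vs. a survey of negotiated deals) are taken from the text.

```python
import statistics

# Hypothetical H100 hourly quotes from the same market on one day (USD/GPU-hr).
# Listed prices are what providers advertise; transacted prices are what deals
# actually close at. All numbers are illustrative, not real data.
listed = [3.20, 3.50, 2.80, 3.00, 3.40, 2.90]
transacted = [(2.10, 5000), (2.60, 2000), (2.40, 8000)]  # (price, GPU-hours)

# Methodology A: median of listed prices (crawl-style index).
index_a = statistics.median(listed)

# Methodology B: volume-weighted average of negotiated transactions
# (survey-style index).
total_hours = sum(h for _, h in transacted)
index_b = sum(p * h for p, h in transacted) / total_hours

print(f"listed-price index:     ${index_a:.2f}/hr")
print(f"transacted-price index: ${index_b:.2f}/hr")
print(f"spread:                 ${index_a - index_b:.2f}")
```

Neither index is wrong; they measure different things. Until a consensus emerges on what "the H100 price" means, this kind of spread is structural, not noise.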

Caption: Overlay comparison of four GPU indices, showing significant divergence in price levels and trends

Where Exactly Is the Price Increase Happening

Using Compute Desk data, we can break down H100 price changes by supplier type and contract structure, and overlay Silicon Data's SDH100RT index (Figure 3). All indicators show prices rising, but the starting points and magnitudes vary greatly depending on the index and contract type.

Caption: H100 price trends by contract type overlaid with the SDH100RT index

Compute Desk's H100 neocloud data tells a more specific story than the aggregate index. On-demand pricing was relatively stable throughout the winter, around $3.00/hour, then surged sharply to $3.50 in March. Spot pricing was noisier and lower, with only a slight upward trend until March. Silicon Data's SDH100RT shows a smoother, steady rise, increasing from $2.00 to $2.64 over the same period. The two indices remain at different price levels and describe different time rhythms: Compute Desk shows a March spike, Silicon Data shows a slow climb.

One-year reserved pricing was largely flat until February, then jumped sharply from $1.90 to $2.64 at the end of March—not a gradual catch-up, but a sudden repricing. This looks more like suppliers collectively adjusting contract rates after the on-demand market tightened, rather than a continuous structural demand driver.

The March story for B200 is even more dramatic (Figure 4). Compute Desk's on-demand index exploded from $5.70 to over $8.00 within weeks. Silicon Data's SDB200RT surged from $4.40 to $6.11 before falling back to $5.47. Both indices recorded this move, but the starting points differed by over $2, and the shapes of the rise and fall were different. B200 has less than five months of data, fewer suppliers, and larger spreads; the two indices are viewing the same event through very different lenses.

Caption: B200 on-demand and reserved price trends, overlaid with Compute Desk and Silicon Data data
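The price levels quoted above can be restated as percentage moves, which makes the divergence easier to see: the indices recorded the same episode with noticeably different magnitudes. The start and end levels below are the ones given in the text; the percentage arithmetic is the only thing added.

```python
# Price moves quoted in the article, expressed as percentage changes
# (start, end) in USD per GPU-hour.
moves = {
    "H100 on-demand (Compute Desk)":     (3.00, 3.50),
    "H100 1-yr reserved (Compute Desk)": (1.90, 2.64),
    "H100 SDH100RT (Silicon Data)":      (2.00, 2.64),
    "B200 on-demand (Compute Desk)":     (5.70, 8.00),
    "B200 SDB200RT peak (Silicon Data)": (4.40, 6.11),
}

for name, (start, end) in moves.items():
    pct = (end - start) / start * 100
    print(f"{name}: ${start:.2f} -> ${end:.2f} ({pct:+.1f}%)")
```

Run this and the spread in magnitudes is immediate: the H100 on-demand move is under 17%, while the reserved repricing and both B200 moves land near 40%. A single headline number hides all of that.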

Infrastructure Issues, Not Just Geographic Differences

Commodity markets have basis differentials. Appalachian natural gas is a textbook case: massive reserves sit atop structurally constrained pipeline capacity, with utilization rates in the Pennsylvania-Ohio corridor often exceeding 100%, and new projects like the Borealis Pipeline not coming online until the late 2020s.

The GPU market has a similar situation: an H100 in Virginia and an H100 in Frankfurt are not the same economic good. But geographic differences alone cannot explain why indices measuring the same market diverge so much. The dislocation in the GPU market is deeper than in Appalachian natural gas. The problem with natural gas is a single missing link: pipeline capacity connecting supply and demand. The infrastructure gap in the compute market exists on both the supply and demand sides. Physical infrastructure—the consistent networks, predictable configurations, and predictable availability needed for reliable compute distribution—is not yet mature and sometimes simply doesn't work. Financial infrastructure—standardized contracts that compress spreads despite physical differences, transparent benchmarks, arbitrage mechanisms—also doesn't exist yet.

The data tells one story. The real, painful experience of trying to procure compute in early 2026 tells another. On-demand capacity for all GPU types is virtually sold out. Finding 64 H100s is difficult: Compute Desk shows 90% of suppliers have zero on-demand cluster availability, and the reserved market isn't much better. In a well-functioning market, this level of scarcity would have pushed prices to a new equilibrium. But it hasn't. This suggests suppliers themselves lack real-time pricing intelligence to adjust. Prices are rising, but too slowly to clear the market. The gap between listed prices and true willingness to pay is being filled by hoarding, subletting, and informal secondary market transactions.

What Needs to Change

The current GPU compute market has seven core problems:

No consensus benchmark. Multiple indices coexist with different methodologies and contradictory conclusions.

Aggregate narratives mask structure. A single "H100 price" number masks huge differences between supplier types and contract terms.

Lack of transaction-level data. In bilateral markets, the deviation between listed prices and actual transaction prices is very large.

No contract standardization. Most GPU rentals are bilaterally negotiated with varying terms. Shorter, more standardized contract terms would improve liquidity and price discovery.

No delivery quality guarantee. Interconnect topology, CPU pairing, network stack, and uptime vary enormously. Buyers need to know the quality of the compute they are purchasing before committing.

Contracts lack liquidity. If demand changes during a reservation, options are limited: either eat the cost or sublet informally. The market needs infrastructure to transfer or resell committed compute, allowing capacity to flow to those who need it most.

No forward curve. Without the ability to price forwards, there is no hedging. This is why lenders discount GPU collateral by 40%-50%, keeping financing costs high.
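The financing consequence of the missing forward curve is easy to make concrete. With no way to hedge resale value, a lender applies the 40%-50% haircut the article cites, which caps borrowing against a GPU fleet. The fleet value below is a made-up example; only the haircut range comes from the text.

```python
# Illustrative effect of the 40%-50% collateral haircut cited in the article.
# Without a forward curve, a lender cannot hedge the future value of GPUs,
# so it discounts the collateral heavily. Fleet value is hypothetical.
fleet_value = 100_000_000  # e.g. $100M of H100s at purchase price

for haircut in (0.40, 0.50):
    borrowing_capacity = fleet_value * (1 - haircut)
    print(f"{haircut:.0%} haircut -> max loan ${borrowing_capacity:,.0f} "
          f"against ${fleet_value:,.0f} of GPUs")
```

A tradable forward curve would let lenders hedge that exposure and shrink the haircut, which is exactly why its absence keeps financing costs high.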

Building a properly functioning market for the most important commodity of the century cannot advance on just one front. Measurement, standardization, contract structure, delivery quality, liquidity—these must advance in sync. Until then, no one can truly say what a GPU hour is worth.

Related Questions

Q: Why do the four GPU pricing indices on the Bloomberg terminal show significant discrepancies?

A: The indices differ because they use different methodologies, frequencies, and data sources. Some track daily H100 rental rates, others aggregate data at the Hopper architecture level, and one uses survey-based monthly contract prices. This lack of standardization leads to inconsistent price levels and trends.

Q: What does the divergence between GPU indices indicate about the current state of the compute market?

A: The divergence indicates an inefficient and immature market. Unlike mature commodity markets where indices converge, the GPU market lacks a consensus benchmark, standardized contracts, and transparent pricing, leading to fragmented and unreliable price signals.

Q: How did H100 pricing behave across different contract types according to Compute Desk data?

A: On-demand pricing was stable at around $3.00/hour until March, then spiked to $3.50. Spot pricing was lower and noisier, with a slight uptrend in March. One-year reserved pricing was flat until late March, then jumped sharply from $1.90 to $2.64, indicating a sudden repricing rather than gradual demand growth.

Q: What infrastructure gaps are exacerbating the fragmentation in the GPU compute market?

A: Both physical and financial infrastructure are underdeveloped. Physically, inconsistent network reliability, configuration, and availability hinder uniform delivery. Financially, there are no standardized contracts, transparent benchmarks, or arbitrage mechanisms to compress price differences across regions and providers.

Q: What are the core problems preventing the GPU compute market from functioning efficiently?

A: Key issues include: no consensus benchmark, aggregated narratives masking structural differences, lack of transaction-level data, absence of contract standardization, unguaranteed delivery quality, illiquid contracts with no resale mechanisms, and no forward curve for hedging or financing.

