Kyle Samani Is Back: This Time, We're Going to Outcompete CEX in Efficiency!

Odaily星球日报 | Published on 2026-03-12 | Last updated on 2026-03-12

Abstract

Kyle Samani, former co-founder of Multicoin Capital, has returned to advocate for PropAMM, a Solana-based innovation he claims is one of the most significant advancements in market microstructure in decades. He argues that by hosting market maker algorithms directly on-chain, PropAMM eliminates the latency inherent in traditional centralized exchanges (CEXs), where constant data exchange between market makers and the exchange server is required. On Solana, pricing updates occur within the same physical silicon, drastically reducing delays. PropAMM already dominates SOL-USDC spot trading on Solana with tighter spreads than major CEXs. Samani predicts it will become the dominant model for on-chain spot, perpetuals, and prediction markets this year. Current challenges include ensuring best execution for takers due to private algorithms and non-deterministic routing, but solutions from aggregators like Jupiter and dFlow are expected. Upcoming Solana upgrades—such as higher compute limits, reduced slot times, and lower network latency—will further enhance PropAMM performance, solidifying its efficiency advantage over CEXs.

Source: Kyle Samani

Compiled by | Odaily Planet Daily (@OdailyChina); Translator | Azuma (@azuma_eth)

Editor's Note: The man who knows best how to promote Solana is back. Kyle Samani, former co-founder of Multicoin Capital, loudly announced his exit from the crypto world not long ago, yet here he is again!

Last night, Kyle Samani posted a long thread on his personal X account. In it, he once again displayed his highly persuasive "shilling" (not derogatory here) rhetoric, using "efficiency", long a weak point of the decentralized narrative, as his breakthrough. He detailed how PropAMM, currently being promoted within the Solana ecosystem, will catch up with or even surpass traditional centralized models in efficiency, arguing that PropAMM is one of the most important innovations in market microstructure in recent years, perhaps even decades.

  • Related articles: 《The Man Who Knows Best How to Shill SOL Exits the Crypto World》; 《Is There More to Kyle Samani's Exit?》.

Below is the original content by Kyle Samani, compiled by Odaily Planet Daily.

PropAMM is one of the most important innovations in market microstructure in recent years, and perhaps even one of the most significant in decades.

To help everyone understand this conclusion, let's first look at how market makers (MMs) quote prices on traditional centralized exchanges (CEX).

Market makers typically engage in physical co-location with the exchange. Each market maker runs an algorithm on a server and connects to another server running the exchange's system via a network cable of uniform length (e.g., 50 meters).

A massive stream of data is constantly exchanged back and forth between market makers and the exchange. Whenever a market maker sends an order to the exchange—whether it's a limit order, cancellation, or market order—the exchange must broadcast this information to all other market makers; then, the other market makers resend their own orders based on the new information; this cycle repeats indefinitely.
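The cycle above can be sketched as a toy message-count model. This is an illustrative simulation only, not any real exchange protocol: the one-way cable latency and the linear impact of each hop are assumptions chosen for clarity.

```python
# Toy model of the CEX quote loop described above: every order a market
# maker (MM) sends must travel to the exchange, be broadcast to all other
# MMs, and trigger re-quotes. Numbers are illustrative assumptions only.

WIRE_US = 0.25  # assumed one-way latency over a ~50 m co-location cable, in microseconds


def quote_update_round() -> float:
    """Latency of one full cycle: MM -> exchange -> broadcast -> re-quote.

    Re-quotes from the other MMs travel in parallel over their own cables,
    so the round-trip time is three wire hops regardless of MM count.
    """
    send = WIRE_US       # originating MM sends an order to the exchange
    broadcast = WIRE_US  # exchange fans the update out to the other MMs
    requote = WIRE_US    # each other MM sends an updated quote back
    return send + broadcast + requote


def messages_per_round(n_market_makers: int) -> int:
    """Messages generated by one order: 1 in, (n-1) broadcasts, (n-1) re-quotes."""
    return 1 + 2 * (n_market_makers - 1)


if __name__ == "__main__":
    for n in (2, 10, 50):
        print(f"{n} MMs -> {messages_per_round(n)} messages per order, "
              f"{quote_update_round():.2f} us per cycle")
```

The point of the sketch is the message count: every single order fans out into roughly 2n messages, and each cycle pays wire latency that an on-chain hosted algorithm, as described next, does not.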

Here is a simple diagram.

Now, let's look at how propAMM works on the Solana mainnet.

The beauty of propAMM on Solana is that the blockchain itself directly "hosts" the market maker algorithms. This means the system no longer needs to send billions of messages back and forth between market makers and the exchange; the market-making algorithms will run directly on the same physical machine as the exchange.

The new diagram is as follows. (That's right, only the Solana blockchain is needed!)

There has long been a common view in the cryptocurrency industry that decentralized systems must be slower (have higher latency) than centralized systems because they require communication between global nodes.

But if you think about it differently, on-chain hosted algorithms could actually have lower latency than traditional centralized exchanges in finance.

Why is that? The reason is that the latency required for propAMM to update prices only involves electrons moving within the same physical piece of silicon. For example, if the last market order causes a change in the SOL-USD price, this information is immediately visible to all propAMMs and used to price the next market order. Everything happens within the same piece of silicon; there is no longer a need for two-way communication between servers.

It's worth noting that propAMM does require frequent oracle updates, but this is not a problem and does not change the overall fact I described above.

The most critical point remains that when the exchange—in this case, the Solana blockchain—directly hosts the propAMM algorithms, the market makers' pricing changes in real-time within the same physical piece of silicon.
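The "hosted on the exchange" idea can be sketched as a pure pricing function the runtime calls inline during trade execution. This is a conceptual model, not the actual on-chain program interface of any propAMM; the linear price-impact rule and all parameter values are invented for illustration.

```python
from dataclasses import dataclass

# Toy sketch of an on-chain hosted market-making algorithm: the MM's
# pricing logic is a function the chain runtime invokes inside the same
# execution as the trade itself, so there is no server-to-server messaging.


@dataclass
class PoolState:
    mid: float         # current mid price
    spread_bps: float  # quoted spread in basis points


def reprice(state: PoolState, trade_size: float, is_buy: bool) -> PoolState:
    """MM logic: shift the mid against order flow (toy linear impact)."""
    impact = 0.0001 * trade_size * (1 if is_buy else -1)
    return PoolState(mid=state.mid + impact, spread_bps=state.spread_bps)


def execute_market_order(state: PoolState, size: float, is_buy: bool):
    half = state.mid * state.spread_bps / 20_000  # half-spread in price units
    fill = state.mid + half if is_buy else state.mid - half
    # Repricing happens here, inside the same call: the very next taker
    # already sees the updated quote, with zero inter-server latency.
    return fill, reprice(state, size, is_buy)


if __name__ == "__main__":
    state = PoolState(mid=150.0, spread_bps=2.0)
    fill, state = execute_market_order(state, 100.0, True)
    print(f"filled at {fill}, new mid {state.mid}")
```

The design point is that `reprice` runs in the same execution path as the fill, which is the sketch's analogue of "everything happens within the same piece of silicon".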

propAMM has already become the dominant mechanism for SOL-USDC spot quotes on Solana, with narrower spreads than all major CEXs. I expect this market structure to become the dominant model for on-chain trading this year, including spot, perpetual contracts (perps), and even prediction markets.

The biggest challenge for propAMM is that there is currently no way to ensure that the taker always gets the best execution, because:

  • None of the propAMM algorithms are public (which is reasonable, as traditional market-making algorithms are also proprietary);
  • When routing trades between multiple propAMMs, the result is non-deterministic.

However, this problem can be solved. I expect all relevant aggregator teams to launch solutions this year, such as Jupiter and dFlow for spot, and Phoenix for contracts.
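The best-execution problem can be illustrated with a small sketch: an aggregator can only sample each propAMM's (private) quote function, and a pool may have repriced by the time the order actually lands. The pool behavior below is invented for illustration; no real aggregator API (Jupiter, dFlow, or Phoenix) is shown.

```python
import random

# Sketch of non-deterministic routing across opaque propAMMs: the price
# an aggregator samples is not guaranteed to be the price it executes at.


def make_pool(base: float, jitter: float):
    """Returns an opaque quote function whose price drifts between calls."""
    def quote() -> float:
        return base + random.uniform(-jitter, jitter)
    return quote


def route_buy(pools):
    """Sample every pool, then execute at the best-looking one."""
    sampled = [(q(), q) for q in pools]
    best_seen, best_pool = min(sampled, key=lambda t: t[0])  # cheapest sampled ask
    executed = best_pool()  # the pool may have repriced in the meantime
    return best_seen, executed


if __name__ == "__main__":
    random.seed(0)
    pools = [make_pool(150.00, 0.02), make_pool(150.01, 0.005)]
    seen, got = route_buy(pools)
    print(f"sampled best {seen:.4f}, executed at {got:.4f}")
```

With `jitter` set to zero the sampled and executed prices coincide; with any drift they can diverge, which is exactly the gap the aggregator solutions mentioned above would need to close.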

Current propAMM is still under-optimized and subject to various limitations of the Solana blockchain itself. This year, Solana will roll out a series of major upgrades that will significantly enhance propAMM's performance, including:

  1. Higher CU (Compute Unit) limits per transaction and larger transaction sizes;
  2. Higher CU limits per block;
  3. Alpenglow: Reducing slot time from 400ms to 100–150ms;
  4. DoubleZero: Reducing global network latency;
  5. Application-controlled execution;
  6. Multiple concurrent leaders.

If propAMM on the Solana mainnet can already offer narrower quotes than all CEXs without these upgrades, imagine how powerful they will become as these upgrades are gradually implemented.

Related Questions

Q: What is PropAMM and why does Kyle Samani consider it a major innovation in market microstructure?

A: PropAMM is a mechanism on the Solana blockchain where the market maker algorithms are hosted directly on-chain. Kyle Samani considers it one of the most important innovations in market microstructure in recent years, or even decades, because it eliminates the need for constant bidirectional messaging between market makers and an exchange, allowing pricing updates to occur with extremely low latency on the same physical silicon.

Q: How does the efficiency of PropAMM on Solana compare to traditional CEX market making?

A: PropAMM on Solana is more efficient than traditional CEX market making. In a CEX, market makers and the exchange servers constantly send data back and forth, creating latency. In contrast, PropAMM algorithms run on the same physical machine as the exchange (the Solana blockchain), so price updates happen almost instantaneously as electrons move within the same piece of silicon, resulting in lower latency and tighter spreads.

Q: What evidence does Kyle Samani provide to show that PropAMM is currently outperforming CEXs?

A: Kyle Samani states that PropAMM has become the dominant quoting mechanism for the SOL-USDC spot pair on Solana and is already achieving narrower spreads than all major centralized exchanges (CEXs).

Q: What are the current challenges facing PropAMM, according to the article?

A: The main challenges are the inability to guarantee best execution for takers because the PropAMM algorithms are not public (which is standard for proprietary trading algorithms) and the non-deterministic nature of routing trades across multiple PropAMMs.

Q: What future Solana upgrades are mentioned that will further improve PropAMM performance?

A: The article lists several upcoming Solana upgrades: higher Compute Unit (CU) limits per transaction and larger transaction sizes, a higher CU limit per block, Alpenglow (reducing slot time from 400ms to 100-150ms), DoubleZero (reducing global network latency), application-controlled execution, and multiple concurrent leaders.
