Solana Gets A Big Infra Signal As Alibaba Demos High-Performance RPCs

bitcoinist · Published 2026-02-11 · Updated 2026-02-11

Introduction

Alibaba Cloud demonstrated high-performance Solana RPC connectivity during a keynote, showcasing a significant infrastructure development for the blockchain. The demo highlighted a migration of a Solana archive node to Alibaba's in-house database, completed in just two days with the aid of AI-assisted coding. Key improvements included a reduction in RPC call latency; a "get slot" call dropped from 25ms to 10ms, while a "get block" call for a 4MB payload saw latency fall to under 200ms. This was achieved by routing traffic through Alibaba's private backbone network instead of the public internet, emphasizing stability and suitability for low-latency, trading-heavy applications. The company also highlighted its global data center footprint as a perfect match for Solana's needs. Although not a formal product announcement, the public benchmarking by a major cloud provider is a notable vote of confidence for the Solana ecosystem.

Solana picked up an infrastructure vote of confidence on Wednesday after Alibaba Cloud used a Hong Kong keynote to demo “high-performance” Solana RPC connectivity, framing the work as part of its broader push to fuse AI tooling with Web3 developer workflows.

According to a clip shared by Solana’s official X account, the demo came during an Accelerate APAC 2026 keynote titled “Fueling Web3 Innovation with AI on Cloud,” delivered by Zhao Qingyuan of Alibaba Cloud Intelligence Group. The pitch was straightforward: reduce latency and operational overhead for builders who rely on fast, reliable RPC access, especially in trading-heavy use cases where milliseconds can matter.

Alibaba Flexes Solana RPC Throughput

Zhao framed the talk as a practical example of how large language models can compress development cycles. In the keynote, he said he had recently migrated a Solana archive node from a Google Bigtable setup to an Alibaba Cloud in-house database implementation over a weekend, leaning on AI-assisted coding despite limited prior familiarity with Solana’s usual developer stack.

“Just this weekend, I spent two days — I migrated the Solana archive node from a Google Bigtable implementation to Alibaba Cloud’s in-house database implementation,” Zhao said. “I haven’t even learned Rust before, and I just used web coding to do this in two days. And it will download the data from Hugging Face for the historical slots, and it will synchronize the data with the mainnet, and it can provide the RPC service.”

The remarks landed alongside Alibaba Cloud’s broader messaging around its Qwen family of models, positioned by the company as a general-purpose LLM stack that can be used for coding, assistants, and multimodal workflows.

The more market-relevant part of the demo was the latency claim. Zhao described a setup where users connect to RPC nodes through Alibaba Cloud’s backbone network rather than via general public internet routes.

In a table shown during the talk, he said latency for a “get slot” RPC call was reduced from roughly 25 milliseconds to about 10 milliseconds under the backbone-network approach, calling it “a huge reduction.” For “get block,” described as a 4MB block payload, he said latency fell from “more than 200 milliseconds” to “less than 200 milliseconds,” while emphasizing stability and suitability for low-latency workloads.
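For context, the two calls Zhao benchmarked correspond to Solana's standard `getSlot` and `getBlock` JSON-RPC methods. The sketch below, using only the Python standard library, shows how one might build those request bodies and time a round trip against any RPC endpoint. The endpoint and the placeholder slot number are assumptions for illustration; Alibaba Cloud has not published a productized Solana RPC URL.

```python
import json
import time
from urllib import request


def rpc_payload(method, params=None):
    """Build a Solana JSON-RPC 2.0 request body."""
    return {"jsonrpc": "2.0", "id": 1, "method": method,
            "params": params if params is not None else []}


def timed_call(url, payload, timeout=10):
    """POST a JSON-RPC payload and return (result, elapsed_ms).

    Requires network access, so it is defined but not invoked here.
    """
    data = json.dumps(payload).encode()
    req = request.Request(url, data=data,
                          headers={"Content-Type": "application/json"})
    start = time.perf_counter()
    with request.urlopen(req, timeout=timeout) as resp:
        body = json.load(resp)
    return body.get("result"), (time.perf_counter() - start) * 1000.0


# Request bodies for the two calls benchmarked in the talk:
get_slot = rpc_payload("getSlot")
# getBlock takes a slot number; 0 here is a placeholder to be replaced
# with a real slot (e.g. the result of a getSlot call).
get_block = rpc_payload("getBlock",
                        [0, {"maxSupportedTransactionVersion": 0}])
print(get_slot["method"], get_block["method"])
```

Calling `timed_call("https://api.mainnet-beta.solana.com", get_slot)` against a public endpoint would return the current slot along with a round-trip time in milliseconds, the same kind of figure Zhao's comparison table reported; `getBlock` responses are far larger, which is why its latency is the harder number to bring down.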

Alibaba Cloud also leaned into geography. Zhao pointed to the firm’s global footprint, highlighting regions such as Frankfurt, the US, and multiple Asia-Pacific hubs including Tokyo, Singapore, and Hong Kong, as a “perfect match” for Solana’s builder base and latency-sensitive applications.

While the clip stops short of announcing a formal partnership or a productized Solana RPC offering with pricing or SLAs, the optics are notable: a major cloud provider using a Solana ecosystem stage to publicly benchmark RPC latency improvements, and explicitly tying that to trading and “co-location for the high frequency calls.”

At press time, SOL traded at $81.

SOL stays above the 0.786 Fib, 1-week chart | Source: SOLUSDT on TradingView.com

Related Questions

Q: What did Alibaba Cloud demonstrate regarding Solana at the Accelerate APAC 2026 keynote?

A: Alibaba Cloud demonstrated 'high-performance' Solana RPC connectivity, showcasing reduced latency and operational overhead for developers, particularly in trading-heavy use cases.

Q: How did Zhao Qingyuan from Alibaba Cloud migrate the Solana archive node, and what was notable about his approach?

A: Zhao Qingyuan migrated a Solana archive node from a Google Bigtable implementation to Alibaba Cloud's in-house database over a weekend using AI-assisted coding, despite having limited prior familiarity with Rust or Solana's developer stack.

Q: What latency improvements did Alibaba Cloud claim for Solana RPC calls using their backbone network?

A: For a 'get slot' RPC call, latency was reduced from roughly 25 milliseconds to about 10 milliseconds. For a 'get block' call with a 4MB payload, latency fell from 'more than 200 milliseconds' to 'less than 200 milliseconds'.

Q: How did Alibaba Cloud position its Qwen family of models in relation to this demonstration?

A: Alibaba Cloud positioned its Qwen family of models as a general-purpose LLM stack that can be used for coding, assistants, and multimodal workflows, using the demo as a practical example of how AI can compress development cycles.

Q: What was the market context for SOL at the time of the article's publication?

A: At press time, SOL was trading at $81.
