Why the Hotter Hyperliquid Gets, the More Arbitrum Quietly Profits

marsbit · Published on 2025-12-03 · Last updated on 2025-12-04

Recently, @HyperliquidX 's HIP-3 protocol has exploded in popularity: stock perps, gold perps, even Pokémon cards and CS skins can now be listed and traded. Hyperliquid is riding high, but many have overlooked that @arbitrum_cn 's liquidity has also surged over the same period.

That's right: the hotter Hyperliquid gets, the more Arbitrum quietly profits. Why?

1) A basic fact: most of the USDC held by Hyperliquid has to be bridged in from Arbitrum. Every time Hyperliquid lists a TSLA stock perp or a gold perp, massive amounts of USDC flow in from Arbitrum behind the scenes. This link is not incidental; it is a structural dependency.

These bridging flows directly contribute to Arbitrum's daily transaction volume and ecosystem activity, keeping Arbitrum firmly in the top Layer 2 spot.


2) Of course, some will say Arbitrum is merely a springboard for Hyperliquid's capital: the money just passes through and leaves. Then why didn't Hyperliquid choose Solana or Base, instead of binding itself deeply to Arbitrum? The reasons:

1. Lowest technical adaptation cost: Hyperliquid needs an EVM-compatible liquidity entry point to safely receive stablecoins. Arbitrum's Nitro architecture keeps bridging latency under one minute, with gas fees below $0.01, so users barely feel any friction.

2. Irreplaceable liquidity depth: Arbitrum's native USDC circulation has reached $8.06 billion, the highest of any Layer 2. Moreover, mature protocols on Arbitrum such as GMX and Gains have already formed a complete loop of lending, trading, derivatives, and yield aggregation. In essence, what Hyperliquid chose in Arbitrum is not just a bridging channel but a mature liquidity network.

3. Irreproducible ecosystem synergy: the stock perps, gold perps, and even treasury tokens newly listed under HIP-3 have long existed on Arbitrum as RWA assets, already usable for lending and farming through DeFi protocols such as Morpho, Pendle, and Euler. As a result, a user can stake RWA assets as collateral on Arbitrum, borrow USDC against them, then bridge it to Hyperliquid to trade stock perps at 5x or even 10x leverage. This is not money passing through and leaving; it is cross-ecosystem liquidity aggregation.
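The arithmetic behind that flow can be made concrete with a minimal sketch. All numbers here are hypothetical (the collateral amount, the loan-to-value ratio, and the leverage multiple are illustrative assumptions, not parameters of any specific protocol): RWA collateral staked on Arbitrum backs a USDC borrow, and the borrowed USDC becomes perp margin on Hyperliquid.

```python
# Hypothetical illustration of the cross-ecosystem flow described above:
# stake RWA collateral on Arbitrum, borrow USDC against it at some
# loan-to-value (LTV) ratio, bridge the USDC to Hyperliquid, and use it
# as margin for a leveraged perp position.

def leveraged_notional(collateral_usd: float, ltv: float, leverage: float) -> float:
    """Perp notional reachable from a given amount of RWA collateral."""
    borrowed_usdc = collateral_usd * ltv   # USDC borrowed on Arbitrum
    return borrowed_usdc * leverage        # perp position size on Hyperliquid

# $10,000 of tokenized treasuries, a 70% LTV, and 5x leverage
# turn into $35,000 of perp exposure without selling the RWA.
print(leveraged_notional(10_000, 0.70, 5))  # 35000.0
```

The point of the sketch is that the same dollar of collateral stays productive on Arbitrum while its borrowed mirror trades on Hyperliquid, which is exactly why the author calls this aggregation rather than pass-through.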


3) In my view, the relationship between Hyperliquid and Arbitrum is by no means simple liquidity "parasitism"; it is strategic complementarity.

Hyperliquid, as a perp DEX appchain, keeps generating trading activity, while Arbitrum provides a sustained liquidity transfusion. For its part, Arbitrum also needs a phenomenon-level application like Hyperliquid to offset the Ethereum ecosystem's shortfall in product appeal.

This reminds me of when Arbitrum was promoting the Orbit Layer 3 framework, whose pitch was precisely "general-purpose Layer 2 + specialized appchains." Orbit lets any team quickly deploy its own Layer 3 appchain, enjoying Arbitrum's security and liquidity while customizing performance parameters to its business needs.

Hyperliquid chose the path of building its own Layer 1 while binding deeply to Arbitrum, which looks nothing like deploying a Layer 3 directly. But if you examine the relationship between the HIP-3 ecosystem and Arbitrum closely, an interesting conclusion emerges: the HIP-3 ecosystem has, in a sense, become Arbitrum's de facto Layer 3 appchain.

After all, the core logic of a Layer 3 is to outsource security and liquidity to a Layer 2 while retaining its own performance advantages. Clearly, Hyperliquid cannot yet supply the liquidity edge the HIP-3 ecosystem needs, but Arbitrum can.


Isn't that just a variant of the Layer 3 operating model?

