Only 10 Public Chains Earn Over $100K a Week: Swimming Naked After the Tide Goes Out

marsbit · Published 2025-12-02 · Last updated 2025-12-03

Nansen, the on-chain data platform, recently launched a 7-day protocol revenue tracker. It shows that among the many chains billing themselves as "next generation," only 10 earned more than $100,000 in protocol revenue over the past week. And apart from the recently launched Monad, the heavily funded chains that went live since late last year, such as Movement, Berachain, and Somnia, now take in less than four figures per day.

Let's walk through the data in detail.


Revenue Is Heavily Concentrated at the Top

According to Nansen's new public-chain dashboard, protocol revenue over the past 7 days is extremely top-heavy. Tron leads by a wide margin with $6.56 million in weekly fee revenue; although its fee growth was only 0.4%, it holds the top spot firmly, a sign that Tron still dominates stablecoin transfers and payments. Solana follows with $3.17 million in weekly fees; with more than 15.51 million active addresses and 415 million transactions, it is the largest chain by transaction count, and even against a deep bear-market drawdown its fee revenue still grew 2%.

Ethereum, the established giant, ranks third with $2.68 million in weekly fees. Notably, its active addresses surged 20% and transaction count grew 4.1%, yet fee revenue plunged 54%, likely a bear-market effect, with average fees per transaction falling even as activity rose.

BNB Chain ranks fourth with $2.62 million in weekly fees, Bitcoin fifth with $1.68 million, and Base sixth with $532,600. These six chains alone contributed roughly $17.24 million in weekly fee revenue, accounting for the overwhelming majority of what users spend in fees across the entire blockchain ecosystem.

How large a share is that? Across all the chains Nansen tracks, everything outside the top six earned a combined total of only about $1,059,000 ($1.06 million). In other words, the top six chains' fee revenue is more than 16 times that of all other chains put together.
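The "16x" claim can be checked with a few lines of arithmetic. A quick sketch using the weekly figures as reported in the article (this is an illustration, not part of Nansen's tooling):

```python
# Weekly protocol fee revenue (USD) for the top six chains, per the article.
top_six = {
    "Tron": 6_560_000,
    "Solana": 3_170_000,
    "Ethereum": 2_680_000,
    "BNB Chain": 2_620_000,
    "Bitcoin": 1_680_000,
    "Base": 532_600,
}
rest_total = 1_059_000  # all remaining tracked chains combined

top_total = sum(top_six.values())
ratio = top_total / rest_total

print(f"Top six total:      ${top_total:,}")   # $17,242,600
print(f"Ratio vs. the rest: {ratio:.1f}x")     # 16.3x
```

The sum matches the article's ~$17.24 million figure, and the ratio comes out just above 16.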


Mid-Tier Chains Barely Hanging On

The chains ranked 7th through 11th are HyperEVM, Polygon, Monad, Arbitrum, and Avalanche, with weekly fee revenue between $75,600 and $204,800. HyperEVM earned $204,800 but saw fees drop 49%, a clear cooling trend. Polygon earned $183,100; its active addresses and transaction count grew 15% and 10% respectively, yet fee revenue still fell 23%.

Avalanche has already slipped below the $100,000 weekly revenue threshold.

While these chains barely clear or merely approach the $100,000 weekly mark, they remain an order of magnitude behind the leaders, and most are posting negative fee growth as the bear market drags everything down.


A Flood of Low-Revenue Chains

Below 12th place, weekly fee revenue falls off a cliff. Many small and mid-sized chains face not only sharply declining fees but also shrinking active addresses and transaction counts.

Bitlayer, Starknet, Linea, and similar chains earned between $25,500 and $37,300 in weekly fees, most of them with double-digit negative growth. Aptos, once a highly anticipated high-performance chain, earned just $12,500, down 5.8%. Some well-known Layer 2 solutions are struggling too: ZKsync earned only $6,480, with transaction volume down 40% and fee revenue down 47%, a picture of across-the-board collapse. Plasma's numbers are even more alarming: $5,240 in weekly fees, with transaction volume down 79% and fees down 60%. Scroll, Sonic, Ronin, and Sei all hover between $2,000 and $3,500 in weekly fees.

Those figures mean these chains take in less than $500 in fees per day.

As for the newer chains, such as Somnia, Berachain, and Movement, DeFiLlama data shows Somnia earning $193 a day, Berachain $45, and Movement just $3 a day (about 20 RMB), with 30-day revenue of only $92 (roughly 650 RMB).

Behind these numbers lies a brutal market reality: a large number of public chains have burned through enormous venture funding and absorbed countless developer hours, yet never built a genuinely valuable user ecosystem. In a bear market, in an on-chain fee market where users vote with real money, their presence has become negligible.
