IOSG: Pouring Cold Water on Prediction Markets

深潮 TechFlow · Published 2025-12-01 · Updated 2025-12-02

Prediction markets depend on real-world events, which are finite and discrete; relative to trading, they are low-frequency.

By Jiawei

Prediction markets are unquestionably one of the most closely watched sectors in crypto. The leading project, Polymarket, has logged over $36 billion in cumulative trading volume and recently closed a strategic round at a $9 billion valuation. Meanwhile, platforms including Kalshi (valued at $11 billion) have also attracted large injections of capital.

(Chart — Source: Dune)

Yet behind the continuous inflow of capital and the eye-catching growth figures, prediction markets as a trading product still face many problems.

In this article, I set aside the prevailing optimism and offer some observations from a different angle.

Prediction is event-based, and events are inherently discontinuous and non-repeatable. Unlike the prices of stocks, foreign exchange, and other assets, which move continuously over time, prediction markets depend on real-world events that are finite and discrete. Relative to trading, they are low-frequency.

Events that command broad attention, have clear outcomes, and settle within a reasonable time frame are genuinely scarce: a presidential election every four years, a World Cup every four years, the Oscars once a year, and so on.

Most social, political, economic, and technological events do not generate sustained trading demand. Such events are limited in number each year and occur too infrequently to support a stable trading ecosystem.

In other words, the low frequency of prediction markets is not something product design or incentive mechanisms can easily change. This underlying property means that, absent major events, trading volume inevitably cannot stay at high levels.

Prediction markets lack fundamentals in the way stock markets have them: a stock's value derives from the company's intrinsic worth, including its future cash flows, profitability, and assets. A prediction market ultimately points to a single outcome and relies on users' interest in that outcome itself.

(This discussion concerns the product's original purpose and sets aside arbitrage and speculation; even in stock markets there are plenty of speculators who do not necessarily care about the underlying asset.)

Against this backdrop, the amount people are willing to wager correlates strongly with an event's importance, market attention, and time horizon: scarce, high-attention events such as championship finals and presidential elections attract large amounts of capital and attention.

Naturally, an ordinary sports fan is far more likely to care about the outcome of the season finals, and to place a heavy bet on it, than to do the same during the regular season.

On Polymarket, the 2024 U.S. presidential election alone accounted for more than 70% of the platform's total open interest (OI). Meanwhile, the vast majority of markets sit in a state of chronically low liquidity and wide bid-ask spreads. Viewed this way, the scale of prediction markets is hard to grow exponentially.
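The concentration claim above is easy to make concrete. Here is a minimal sketch of measuring what share of a platform's total OI sits in its single largest market; the open-interest figures below are purely hypothetical, not real Polymarket data:

```python
# Hypothetical open interest (USD) per market on a prediction platform.
# These numbers are illustrative only, not real Polymarket figures.
open_interest = {
    "2024-us-presidential-election": 700_000_000,
    "fed-rate-decision": 120_000_000,
    "premier-league-winner": 90_000_000,
    "oscars-best-picture": 50_000_000,
    "long-tail-markets-combined": 40_000_000,
}

total_oi = sum(open_interest.values())
top_market, top_oi = max(open_interest.items(), key=lambda kv: kv[1])
top_share = top_oi / total_oi  # fraction of all OI in the largest market

print(f"Top market: {top_market}")
print(f"Share of total OI: {top_share:.1%}")  # 700M / 1,000M = 70.0%
```

When one market carries most of the OI, the remaining markets split whatever liquidity is left, which is consistent with the wide spreads observed on long-tail events.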

Prediction markets have a gambling character, yet they struggle to generate the retention and expansion that gambling does.

As we all know, the real engine of gambling addiction is instant feedback: a slot machine spins every few seconds, a hand of Texas hold'em lasts a few minutes, and perpetual futures and memecoin prices move every second.

Prediction markets, by contrast, have long feedback cycles; most events take weeks to months to settle. And events with fast feedback are often not interesting enough to warrant large bets.

Instant positive feedback markedly raises the frequency of dopamine release and reinforces usage habits; delayed feedback cannot build stable user retention.

In some categories of events, information is highly asymmetric among participants.

For competitive sports, outcomes depend not only on the teams' on-paper strength but also, to a large extent, on the athletes' in-game performance, so considerable uncertainty remains.

Political events, however, involve black-box processes of insider information, channels, and connections. Insiders enjoy an enormous informational advantage, and their bets carry far greater certainty.

Consider the vote-counting process in an election, internal polling, or ground organization in key districts: outside participants can hardly access this information. So far, no regulator has clearly defined "insider trading" for prediction markets; this remains a gray area.

In short, for this class of events, the side at an information disadvantage easily becomes exit liquidity.

Because language and definitions are ambiguous, the events underlying prediction markets can rarely be made fully objective.

For example: "Will Russia and Ukraine reach a ceasefire in 2025?" depends on which criteria are used; "Will a crypto ETF be approved by a given date?" admits full approval, partial approval, conditional approval, and so on. This raises the problem of "social consensus": when the two sides are evenly matched, the side judged to have lost will not simply concede.

Such ambiguity forces platforms to build a dispute-resolution mechanism. And once prediction markets touch linguistic ambiguity and dispute resolution, they can no longer rely entirely on automation or objectivity, leaving room for human manipulation and corruption.

The main value proposition pitched for prediction markets is the "wisdom of crowds": given low trust in media and mainstream discourse, prediction markets can supposedly aggregate the highest-quality information worldwide into a collective consensus.

However, before prediction markets reach very large-scale adoption, this "information sampling" is necessarily partial, and the sample insufficiently diverse. The user base of prediction-market platforms may be highly homogeneous.

For example, in its early stage a prediction market is bound to be a platform composed mainly of crypto users, whose views on political, social, and economic events may converge strongly, forming an echo chamber.

In that case, the market reflects the collective bias of a particular circle, still a considerable distance from the "wisdom of crowds".
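The "consensus" a prediction market produces is ultimately just a price. A minimal sketch of how that price is conventionally read as a probability (the $0.62 price below is illustrative, not from any real platform):

```python
# A binary prediction-market share pays out $1 if the event resolves "Yes".
# Under the standard reading, its price equals the crowd's implied probability.
# All prices here are illustrative, not taken from any real platform.

def implied_probability(yes_price: float) -> float:
    """Price of a $1-payout 'Yes' share, read as the implied probability."""
    assert 0.0 < yes_price < 1.0
    return yes_price

def expected_profit(yes_price: float, own_probability: float) -> float:
    """Expected profit per share, if your own probability estimate is correct."""
    return own_probability * 1.0 - yes_price

# A 'Yes' share trading at $0.62 implies a 62% crowd consensus.
print(f"{implied_probability(0.62):.0%}")
# If you believe the true probability is 70%, buying has positive expected value:
print(f"{expected_profit(0.62, 0.70):.2f}")
```

The echo-chamber concern above translates directly into this framing: if the participant pool shares a bias, the price still looks like a probability, but it encodes the circle's bias rather than a broad aggregate of information.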

Conclusion

The point of this article is not to short prediction markets, but to stay calm amid surging FOMO, especially after living through the rise and fall of hot narratives like ZK and GameFi.

Over-reliance on exceptional events like presidential elections, on short-term social-media sentiment, and on airdrop incentives tends to inflate the surface-level data, and is not enough to support judgments about long-term growth.

That said, from the perspective of user education and onboarding, prediction markets will still occupy an important position over the next three to five years. Like on-chain yield-bearing savings products, they have an intuitive product form and a low learning cost, and stand a better chance than on-chain trading protocols of drawing users from outside into the crypto ecosystem. On this basis, prediction markets will most likely continue to develop and, to some degree, become an entry-level product for the crypto industry.

Future prediction markets may also come to occupy certain verticals, such as sports and politics. They will continue to exist and expand, but in the short term they lack the foundational conditions for exponential growth. We should approach investment in prediction markets with cautious optimism.
