Tokens, Models, and Bubbles: The Crypto × AI Game in the Primary Market
Based on a two-year retrospective, this article analyzes the convergence of Crypto and AI from a primary market perspective. Initially, the crypto space heavily promoted "Crypto Helps AI" through three main narratives: tokenization of compute, tokenization of data, and tokenization of models. However, these efforts largely produced what the author calls a "tokenization illusion"—projects that issued tokens but lacked real product-market fit or sustainable business models.
The piece critiques each approach in turn: decentralized compute networks often fail to meet enterprise reliability standards; tokenized data struggles with supply-demand alignment because user motivation is low and quality requirements are high; and model tokenization is fundamentally flawed since AI models are non-scarce, easily replicable, and depreciate quickly. The author further argues that projects focused on verifiable inference (such as ZKML or OPML) are solutions in search of a problem: real-world AI failures rarely stem from malicious tampering, but from design errors or misconfigurations.
The author references Vitalik Buterin’s updated views, which now present a more balanced perspective compared to two years ago. Buterin outlines four quadrants of Crypto × AI integration: two where crypto (especially Ethereum) provides trustless, economic layers for AI agents and private interactions, and two where AI enhances crypto—through local LLMs acting as user shields for security and AI improving market efficiency and DAO governance.
The conclusion emphasizes that meaningful progress lies at the intersection of both fields, beyond mere tokenization or speculative narratives, and expresses hope for more substantive developments in the future.
BitPush 02/12 06:16