Tokens, Models, and Bubbles: The Crypto × AI Game in the Primary Market

Bitpush | Published on 2026-02-12 | Last updated on 2026-02-12

Abstract

Based on a two-year retrospective, this article analyzes the convergence of Crypto and AI from a primary market perspective. Initially, the crypto space heavily promoted "Crypto Helps AI" through three main narratives: computation power tokenization, data tokenization, and model tokenization. However, these efforts largely resulted in what the author calls a "tokenization illusion": projects that issued tokens but lacked real product-market fit or sustainable business models. The piece critiques these approaches: decentralized compute networks often fail to meet enterprise reliability standards; tokenized data struggles with supply-demand alignment due to low user motivation and high professional requirements; and model tokenization is fundamentally flawed since AI models are non-scarce, easily replicable, and depreciate quickly. Additionally, projects focusing on verifiable inference (like ZKML or OPML) are solutions in search of a problem, as real-world AI failures are rarely due to malicious tampering but rather design errors or misconfigurations. The author references Vitalik Buterin's updated views, which now present a more balanced perspective compared to two years ago. Buterin outlines four quadrants of Crypto × AI integration: two where crypto (especially Ethereum) provides trustless, economic layers for AI agents and private interactions, and two where AI enhances crypto, through local LLMs acting as user shields for security and AI improving market efficiency and DAO governance.

Author: Lao Bai

Original Title: Crypto × AI from the Primary Market Perspective: An Experiment in Tokenized Illusion


Two years on, Vitalik has tweeted on this topic again, so I am following up on my research report from two years ago; even the timing is identical, February 10th. (Related reading: ABCDE: Sorting Out AI+Crypto from a Primary Market Perspective)

Two years ago, Vitalik implicitly signaled that he was not very optimistic about the various "Crypto Helps AI" trends popular at the time. The three hot narratives in the space back then were computing power assetization, data assetization, and model assetization; my report from two years ago mainly examined these three narratives along with some observations and doubts from the primary market. Vitalik, for his part, was more optimistic about "AI Helps Crypto".

The examples he gave at the time were:

  • AI as a participant in the game;

  • AI as the game interface;

  • AI as the game rules;

  • AI as the game objective.

Over the past two years, we made many attempts at Crypto Helps AI, but the results have been minimal. Many sectors and projects ended up simply issuing a token and stopping there, without real commercial PMF (Product-Market Fit), a pattern I call the "Tokenized Illusion".

1. Computing Power Assetization – Most networks cannot provide commercial-grade SLAs; they are unreliable and frequently go offline, can only handle simple small- and medium-model inference tasks, and mostly serve niche markets. Revenue is not linked to the token.

2. Data Assetization – High friction, low willingness, and high uncertainty on the supply side (retail users). The demand side (enterprises) needs structured, context-dependent data from professional suppliers who carry trust and legal liability, which DAO-style Web3 teams find difficult to provide.

3. Model Assetization – Models themselves are non-scarce, replicable, fine-tunable, and rapidly depreciating process assets, not final-state assets. Hugging Face itself is a collaboration and distribution platform, more a GitHub for ML than an App Store for models. So-called "decentralized Hugging Face" projects aiming to tokenize models have therefore mostly ended in failure.

Additionally, over these two years we tried various forms of "Verifiable Inference", a classic case of a hammer looking for a nail: from ZKML to OPML to game-theoretic approaches, and so on. Even EigenLayer shifted its Restaking narrative toward Verifiable AI.

But it's basically similar to what happened in the Restaking sector – few AVSs (Actively Validated Services) are willing to pay continuously for extra verifiable security.

Similarly, verifiable inference mostly verifies things that no one actually needs verified. The threat model on the demand side is extremely vague: who exactly are we protecting against?

AI output errors (a model-capability problem) far outnumber maliciously tampered AI outputs (an adversarial problem). The various security incidents around OpenClaw and Moltbook a while ago showed that the real problems come from:

  • Wrong strategy design

  • Excessive permissions granted

  • Unclear boundaries

  • Unexpected interactions between tool combinations

  • ...

The imagined nails, such as "the model being tampered with" or "the inference process being maliciously rewritten", almost never show up in practice.

Last year I posted a picture about this; I wonder if any old-timers remember it.

The ideas Vitalik presented this time are clearly more mature than two years ago, thanks in part to the progress made in privacy, x402, ERC-8004, prediction markets, and other directions.

Of the four quadrants he lays out this time, half belong to AI Helps Crypto and the other half to Crypto Helps AI; he is no longer clearly biased toward the former as he was two years ago.

Top left and bottom left – Utilizing Ethereum's decentralization and transparency to solve AI's trust and economic collaboration problems:

1. Enabling trustless and private AI interaction (Infrastructure + Survival): Using technologies like ZK and FHE to ensure the privacy and verifiability of AI interactions (I am not sure whether the verifiable inference mentioned earlier counts).

2. Ethereum as an economic layer for AI (Infrastructure + Prosperity): Enabling AI agents to make payments, hire other agents, post bonds, or establish reputation systems through Ethereum, thereby building a decentralized AI architecture rather than being confined to a single giant platform (a rough sketch of the payment primitive follows below).
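To make the economic-layer idea concrete, here is a minimal TypeScript sketch of an agent posting a small ETH bond to a counterparty with ethers.js. This is my own illustration of the payment primitive, not anything from Vitalik's post: the RPC URL, private-key environment variable, counterparty address, and amount are all placeholders, and a real design would lock funds in an escrow or reputation contract (for example, along the lines of ERC-8004) with slashing rules rather than send a bare transfer.

```typescript
import { ethers } from "ethers";

// Hypothetical setup: the RPC endpoint and private-key variable are placeholders.
const provider = new ethers.JsonRpcProvider("https://rpc.example.org");
const agent = new ethers.Wallet(process.env.AGENT_PRIVATE_KEY!, provider);

// An agent posts a small ETH bond to a counterparty agent before delegating a task.
// A production design would lock the funds in an escrow contract with slashing and
// reputation logic; the bare transfer here only illustrates the payment primitive.
async function postBond(counterparty: string, amountEth: string): Promise<string> {
  const tx = await agent.sendTransaction({
    to: counterparty,
    value: ethers.parseEther(amountEth),
  });
  const receipt = await tx.wait();
  return receipt!.hash; // transaction hash once the bond is on-chain
}

postBond("0x0000000000000000000000000000000000000001", "0.01")
  .then((hash) => console.log("bond posted in tx", hash))
  .catch(console.error);
```

The point of the sketch is that an agent needs nothing beyond a keypair and an RPC endpoint to transact, which is what makes a neutral settlement layer attractive for agent-to-agent economics.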

Top right and bottom right – Utilizing AI's intelligent capabilities to optimize the user experience, efficiency, and governance of the crypto ecosystem:

3. The cypherpunk mountain-man vision with local LLMs (Impact + Survival): AI as the user's "shield" and interface. For example, local LLMs (Large Language Models) can automatically audit smart contracts, verify transactions before signing, reduce reliance on centralized front-end pages, and safeguard individual digital sovereignty (see the sketch after this list).

4. Making much better markets and governance a reality (Impact + Prosperity): AI participates deeply in prediction markets and DAO governance. AI can act as an efficient participant, amplifying human judgment by processing information at scale and addressing market and governance problems such as insufficient human attention, high decision-making costs, information overload, and voter apathy.
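As an illustration of the "shield" idea in item 3, here is a hedged TypeScript sketch in which a wallet asks a locally running model to flag a transaction before the user signs it. It assumes an Ollama-style HTTP endpoint on localhost and a placeholder model name; the transaction fields and prompt are invented for illustration and are not from Vitalik's post.

```typescript
// Minimal sketch: ask a local LLM to review a transaction before the user signs it.
// Assumes a local Ollama-style server at http://localhost:11434; "llama3" is a placeholder model name.
interface PendingTx {
  to: string;       // destination contract or address
  valueEth: string; // amount about to be sent
  calldata: string; // raw or decoded calldata the wallet is about to sign
}

async function reviewWithLocalLLM(tx: PendingTx): Promise<string> {
  const prompt =
    "You are a wallet safety reviewer. Flag anything suspicious in this transaction.\n" +
    `to: ${tx.to}\nvalue: ${tx.valueEth} ETH\ncalldata: ${tx.calldata}\n` +
    "Answer with APPROVE or WARN and a one-line reason.";

  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response as string; // the verdict the wallet UI would surface before signing
}
```

Because the model runs locally, the pending transaction never leaves the user's machine before signing, which is the point of the cypherpunk framing: the AI guard adds no new trusted third party.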

Previously, we were trying desperately to make Crypto Help AI while Vitalik stood on the other side. Now we finally meet in the middle, though it seems to have little to do with the various "XX assetization" plays or some AI Layer 1. I hope that when we look back at today's post in two years, there will be some new directions and surprises.



Original link: https://www.bitpush.news/articles/7611374

Related Questions

Q: What are the three main areas of "Crypto Helps AI" that the author identified as largely ineffective over the past two years?

A: The three areas are: 1) computing power assetization (unstable, unable to provide commercial-grade SLAs, serving mostly niche markets), 2) data assetization (high friction and low willingness on the supply side, while the demand side wants structured data from professional, accountable suppliers), and 3) model assetization (models are non-scarce, replicable, rapidly depreciating assets, so tokenization attempts largely failed).

Q: According to the author, what is the fundamental issue with "Verifiable Inference" projects like ZKML and OPML?

A: They are a solution in search of a problem, a hammer looking for a nail. The demand-side threat model is extremely vague, and such projects mostly verify things that no one actually needs verified. AI output errors (model-capability issues) are far more common than maliciously tampered AI outputs (adversarial problems).

Q: How did Vitalik Buterin's perspective on the Crypto × AI intersection evolve from two years ago to the present, as described in the article?

A: Two years ago, Vitalik was more skeptical of "Crypto Helps AI" and more optimistic about "AI Helps Crypto". His current perspective, laid out in a new four-quadrant model, is more balanced: two quadrants cover "AI Helps Crypto" (using AI to improve crypto UX, efficiency, and governance) and two cover "Crypto Helps AI" (using Ethereum's decentralization and transparency to solve AI's trust and economic-collaboration problems).

Q: What are the two use cases mentioned under the "Crypto Helps AI" quadrants in Vitalik's new framework?

A: 1) Enabling trustless and private AI interaction using technologies like ZK and FHE for privacy and verifiability, and 2) using Ethereum as an economic layer for AI, letting agents make payments, hire other agents, post bonds, or build reputation systems as the basis of a decentralized AI architecture.

Q: What does the author mean by the term "Tokenized Illusion" in the context of Crypto × AI projects?

A: It refers to projects that simply launch a token without achieving real Product-Market Fit (PMF) or building a sustainable business model; beyond the token issuance itself there is little genuine commercial substance.
