Tokens, Models, and Bubbles: The Crypto × AI Game in the Primary Market

Bitpush · Published on 2026-02-12 · Last updated on 2026-02-12

Abstract

Based on a two-year retrospective, this article analyzes the convergence of Crypto and AI from a primary-market perspective. The crypto space initially promoted "Crypto Helps AI" through three main narratives: computing power tokenization, data tokenization, and model tokenization. These efforts largely produced what the author calls a "tokenization illusion": projects that issued tokens but lacked real product-market fit or a sustainable business model. The piece critiques each approach: decentralized compute networks rarely meet enterprise reliability standards; tokenized data struggles with supply-demand alignment because supplier motivation is low and professional requirements are high; and model tokenization is fundamentally flawed, since AI models are non-scarce, easily replicated, and quick to depreciate. Projects focused on verifiable inference (such as ZKML or OPML) are solutions in search of a problem, as real-world AI failures rarely stem from malicious tampering but from design errors and misconfigurations. The author then turns to Vitalik Buterin's updated views, which are more balanced than two years ago. Buterin outlines four quadrants of Crypto × AI integration: two where crypto (especially Ethereum) provides trustless and economic layers for AI agents and private interactions, and two where AI enhances crypto, through local LLMs acting as user shields for security and through AI improving market efficiency and DAO governance.

Author: Lao Bai

Original Title: Crypto × AI from the Primary Market Perspective: An Experiment in Tokenized Illusion


Two years on, Vitalik has tweeted about Crypto × AI again, and the timing is even identical: February 10th. So here is a follow-up to my research report from back then. (Related reading: ABCDE: Sorting Out AI+Crypto from a Primary Market Perspective)

Two years ago, Vitalik implicitly signaled that he wasn't very optimistic about the various "Crypto Helps AI" trends popular at the time. The three narratives then in vogue were computing power assetization, data assetization, and model assetization; my report from two years ago mainly examined these three from a primary-market angle, along with some observations and doubts. Vitalik, for his part, was more optimistic about AI Helps Crypto.

The examples he gave at the time were:

  • AI as a participant in the game;

  • AI as the game interface;

  • AI as the game rules;

  • AI as the game objective;

Over the past two years we have made many attempts at Crypto Helps AI, but the results have been minimal. Many sectors and projects ended up simply issuing a token and stopping there, without real commercial PMF (Product-Market Fit). I call this the "Tokenized Illusion".

1. Computing Power Assetization – Most networks cannot provide a commercial-grade SLA; they are unreliable and frequently go offline. They can only handle simple small- and medium-model inference tasks, mostly serving niche markets, and revenue is not linked to the token.

2. Data Assetization – The supply side (retail users) faces high friction, low willingness, and high uncertainty. The demand side (enterprises) needs structured, context-dependent data from professional suppliers who carry trust and legal liability, which DAO-style Web3 teams find difficult to provide.

3. Model Assetization – Models themselves are non-scarce, replicable, fine-tunable, and rapidly depreciating process assets, not final-state assets. Hugging Face itself is a collaboration and dissemination platform, more like a GitHub for ML than an App Store for models. Accordingly, so-called "decentralized Hugging Face" projects aiming to tokenize models have mostly ended in failure.

Additionally, over these two years we have tried various forms of "Verifiable Inference," a classic case of a hammer in search of a nail. From ZKML to OPML to game-theoretic schemes, even EigenLayer pivoted its Restaking narrative toward Verifiable AI.

But it's basically similar to what happened in the Restaking sector – few AVSs (Actively Validated Services) are willing to pay continuously for extra verifiable security.

Similarly, verifiable inference mostly verifies things that no one actually needs verified. The threat model on the demand side is extremely vague: who exactly are we protecting against?

AI output errors (model capability issues) far outnumber maliciously tampered AI outputs (adversarial problems). The various security incidents around OpenClaw and Moltbook some time ago showed that the real problems come from:

  • Wrong strategy design

  • Excessive permissions granted

  • Unclear boundaries

  • Unexpected interactions between tool combinations

  • ...

Almost none of the imagined nails like "model being tampered with" or "inference process being maliciously rewritten" exist.

Last year I posted this picture; I wonder whether any old-timers remember it.

The ideas Vitalik presented this time are clearly more mature than two years ago, thanks in part to the progress made in privacy, x402, ERC-8004, prediction markets, and other directions.

Of the four quadrants he drew this time, half belong to AI Helps Crypto and half to Crypto Helps AI; he is no longer clearly tilted toward the former, as he was two years ago.

Top left and bottom left – Utilizing Ethereum's decentralization and transparency to solve AI's trust and economic-collaboration problems:

1. Enabling trustless and private AI interaction (Infrastructure + Survival): using technologies such as ZK and FHE to ensure the privacy and verifiability of AI interactions (I'm not sure whether the verifiable inference discussed above counts).

2. Ethereum as an economic layer for AI (Infrastructure + Prosperity): enabling AI agents to make payments, hire other bots, post deposits, and build reputation through Ethereum, thereby supporting a decentralized AI architecture rather than one confined to a single giant platform.
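To make the "economic layer" pattern concrete, here is a toy in-memory sketch of the deposit-and-reputation mechanics described above. All names (`AgentEconomicLayer`, `post_bond`, `slash`) are my own illustrations, not any on-chain standard; in practice this state would live in a smart contract rather than a Python dict.

```python
class AgentEconomicLayer:
    """Toy sketch: agents post a bond before taking work, earn
    reputation per completed task, and can be slashed for misbehavior.
    A real deployment would implement this as contract state."""

    def __init__(self):
        self.bonds = {}        # agent id -> staked deposit
        self.reputation = {}   # agent id -> completed-task count

    def post_bond(self, agent: str, amount: int) -> None:
        self.bonds[agent] = self.bonds.get(agent, 0) + amount

    def complete_task(self, agent: str) -> None:
        # Only bonded agents may take work; the bond backs their behavior.
        if self.bonds.get(agent, 0) <= 0:
            raise ValueError("agent must post a bond before taking work")
        self.reputation[agent] = self.reputation.get(agent, 0) + 1

    def slash(self, agent: str, amount: int) -> None:
        # Penalize misbehavior by burning part of the deposit.
        self.bonds[agent] = max(0, self.bonds.get(agent, 0) - amount)
```

The point of the pattern is that economic stake substitutes for platform-level trust: an agent's counterparties need only check its bond and track record, not its operator.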

Top right and bottom right – Utilizing AI's intelligent capabilities to optimize the user experience, efficiency, and governance of the crypto ecosystem:

3. Cypherpunk mountain man vision with local LLMs (Impact + Survival): AI as the user's "shield" and interface. For example, local LLMs (Large Language Models) can automatically audit smart contracts, verify transactions, reduce reliance on centralized front-end pages, and safeguard individual digital sovereignty.
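The "shield" idea above can be sketched as a pre-signing check: the wallet hands a decoded transaction to a locally running model, which flags anything suspicious before the user signs. The function below is a hedged illustration; `llm_judge` is a placeholder for the local LLM, and the heuristics stand in for whatever the model would actually catch.

```python
def screen_transaction(tx: dict, llm_judge=None) -> dict:
    """Sketch of a local 'AI shield' that screens a decoded transaction
    before signing. llm_judge is a stand-in for a local LLM call that
    returns a list of warning strings; simple heuristics are used here."""
    warnings = []
    # Blanket approvals are a common drainer pattern.
    if tx.get("method") == "setApprovalForAll" and tx.get("approved"):
        warnings.append("grants unlimited token approval")
    # Flag unusually large transfers (threshold is illustrative).
    if tx.get("value_eth", 0) > 1.0:
        warnings.append("transfers more than 1 ETH")
    if llm_judge is not None:
        warnings.extend(llm_judge(tx))
    return {"sign": not warnings, "warnings": warnings}
```

Because the model runs locally, the screening happens without trusting a centralized front end, which is exactly the digital-sovereignty point of this quadrant.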

4. Make much better markets and governance a reality (Impact + Prosperity): AI deeply participates in Prediction Markets and DAO governance. AI can act as an efficient participant, amplifying human judgment by processing information on a large scale, solving various market and governance problems such as insufficient human attention, high decision-making costs, information overload, and voter apathy.
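One small way AI can stretch scarce governance attention, as described above, is triage: rank incoming proposals so humans review the highest-impact ones first. The sketch below is illustrative only; the `summarize` hook stands in for an LLM, and the risk heuristic (treasury keywords plus requested amount) is an assumption of mine, not a real DAO's scoring rule.

```python
def triage_proposals(proposals: list, summarize=None) -> list:
    """Rank DAO proposals so human reviewers see the highest-stakes
    items first. `summarize` is a stand-in for an LLM summarizer."""
    def risk(p: dict) -> int:
        score = p.get("amount", 0)          # requested funds
        if "treasury" in p["text"].lower():  # treasury moves are high stakes
            score += 1000
        return score

    ranked = sorted(proposals, key=risk, reverse=True)
    if summarize is not None:
        return [(p["id"], summarize(p["text"])) for p in ranked]
    return [p["id"] for p in ranked]
```

Here the AI filters and compresses; humans still cast the votes, which matches the article's framing of AI as an amplifier of human judgment rather than its replacement.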

Previously, we were desperately trying to make Crypto Help AI while Vitalik stood on the other side. Now we finally meet in the middle, though it seems unrelated to the various XX tokenizations or any AI Layer1. I hope that, looking back at today's post in two years, there will be some new directions and surprises.



Original link: https://www.bitpush.news/articles/7611374

