a16z: 5 Ways Blockchain Can Help AI Agent Infrastructure

marsbit · Published 2026-04-21 · Last updated 2026-04-21

Introduction

Blockchain technology provides critical infrastructure for AI agents by addressing five key challenges:

1. Non-human identity: AI agents lack standardized, portable identity systems. Blockchain enables verifiable, cross-platform agent identities (like "Know Your Agent" frameworks) through cryptographic credentials and on-chain registries.
2. AI governance: When AI systems execute decisions, blockchain ensures transparency and prevents centralized control by recording actions on-chain and enabling auditable execution logs.
3. Payments: Stablecoins and crypto payments (e.g., x402, MPP) serve as default settlement layers for agent-to-agent commerce, enabling frictionless, programmable transactions for "headless" AI-native businesses.
4. Trust and verification: As AI scales, blockchain provides cryptographic proof of origin and auditable histories, making verification—not intelligence—the scarce resource.
5. User control: Crypto-native tools (e.g., delegation toolkits, intent-based architectures) allow users to set boundaries and maintain oversight over autonomous agents, minimizing blind trust.

Together, blockchain and AI can create an economic infrastructure built on transparency, accountability, and user sovereignty.

Author: a16z

Compiled by: Hu Tao, ChainCatcher

 

AI agents are rapidly transitioning from "co-pilots" to economic actors, faster than the surrounding infrastructure can keep pace.

While agents can now perform tasks and conduct transactions, they lack standardized methods to prove, across environments, who they represent, what they are authorized to do, and how they get paid. Identity information cannot be shared across platforms, payment methods are not programmable by default, and coordination happens in isolated silos.

Blockchain addresses this problem at the infrastructure layer. Public ledgers provide a record of every transaction, auditable by anyone. Wallets give users portable identities. Stablecoins offer an alternative settlement method. These are not distant future technologies; they are available now, and they let agents operate permissionlessly as true economic entities.

 

1. Non-Human Identity

The current bottleneck in the agent economy is no longer intelligence, but identity.

In the financial services industry alone, the number of non-human identities (automated trading systems, risk engines, fraud models) already outnumbers human employees by about 100 to 1. With the large-scale deployment of modern agent frameworks (tool-using LLMs, autonomous workflows, multi-agent orchestration), this ratio is bound to rise across all industries.

Yet, these agents are effectively unbanked. They can interact with the financial system, but their interactions are not portable, verifiable, or trusted by default. They lack standardized ways to prove authority, operate independently across platforms, or be held accountable for their actions.

What is missing is a universal identity layer—an SSL equivalent for agents—to standardize coordination across platforms. There are significant attempts, but the approaches remain fragmented: on one side, vertically integrated, fiat-first stacks; on the other, crypto-native, open standards (like x402 and emerging agent identity proposals); and developer frameworks like MCP (Model Context Protocol) extensions trying to bridge identity at the application layer.

There is still no widely adopted, interoperable way for one agent to prove to another: who it represents, what it is allowed to do, and how it gets paid. This is the core idea of KYA (Know Your Agent).

Just as humans rely on credit history and KYC (Know Your Customer), agents need cryptographically signed credentials that bind the agent to its principal, permissions, constraints, and reputation. Blockchain provides a neutral coordination layer for all this: portable identities, programmable wallets, and verifiable proofs that can be parsed in chat apps, APIs, and marketplaces.

We are already seeing early implementations emerge: on-chain agent registries, wallet-native agents using USDC, ERC standards for "trust-minimized agents," and developer toolkits that combine identity with embedded payments and fraud controls.
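To make the KYA idea concrete, here is a minimal sketch of a signed agent credential: a principal binds an agent to a set of permissions and a spending limit, and a counterparty verifies the signature and the scope before accepting an action. This is an illustration, not any real registry's API; HMAC stands in for the asymmetric signature scheme an on-chain system would actually use, and all field names are invented for the example.

```python
import hashlib
import hmac
import json

def issue_credential(principal_key: bytes, agent_id: str, permissions: list[str],
                     spend_limit_usd: float) -> dict:
    """Principal signs a credential binding the agent to its permissions.
    (Sketch: HMAC with a shared key stands in for a real signature scheme.)"""
    claims = {
        "agent_id": agent_id,
        "permissions": sorted(permissions),
        "spend_limit_usd": spend_limit_usd,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(principal_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(principal_key: bytes, credential: dict) -> bool:
    """Recompute the signature over the claims and compare in constant time."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(principal_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"])

def authorize(credential: dict, action: str, amount_usd: float,
              principal_key: bytes) -> bool:
    """A counterparty checks: valid signature, permitted action, within limits."""
    if not verify_credential(principal_key, credential):
        return False
    claims = credential["claims"]
    return action in claims["permissions"] and amount_usd <= claims["spend_limit_usd"]
```

The point of the sketch is the binding: the agent cannot expand its own permissions or raise its own limit, because any change to the claims invalidates the principal's signature.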

But until a universal identity standard emerges, merchants will continue to block agents at the firewall.

 

2. Governance of AI-Operated Systems

As agents begin to operate real systems, new questions arise.

The key question is who is truly in control. Imagine a community or company where AI systems coordinate critical resources, whether capital allocation or supply chain management. Even if people vote on policy changes, those votes carry little real power if the underlying AI layer is controlled by a single vendor who can push model updates, adjust constraints, or override decisions. The formal governance layer might be decentralized, but the operational layer remains centralized: whoever controls the model ultimately controls the outcome.

When agents take on governance roles, they introduce a new layer of dependency. Theoretically, this could make direct democracy easier to implement: everyone could have an AI representative responsible for understanding complex proposals, weighing trade-offs, and voting based on their stated preferences.

But this vision only works if these agents are truly accountable to the people they represent, are portable across service providers, and are technically constrained to follow human instructions. Otherwise, you end up with a system that looks democratic on the surface but is actually driven by opaque model behavior that no one really controls.

If the current reality is that agents are built from a small number of foundation models, then we need ways to prove that an agent acts in the user's interest, not the model company's. This might require cryptographic guarantees at multiple levels: (1) exactly which training data, fine-tuning process, or RL process a model instance originated from; (2) the exact prompts and instructions controlling a particular agent; (3) a record of the agent's actual behavior in the real world; and (4) reliable assurance that once deployed, the provider cannot change the instructions or retrain the agent to operate differently without the user's knowledge. Without these guarantees, agent governance ultimately devolves into governance by whoever controls the model weights.
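The guarantees listed above amount to a verifiable provenance chain: commit to the model's origin and instructions, then chain a commitment over every subsequent action, so any silent retraining or prompt change produces a different chain head. Below is a minimal hash-chain sketch of that idea, assuming the hashes would be posted on-chain; the class name, `model_fingerprint` argument, and action format are all invented for illustration.

```python
import hashlib
import json

def commit(data: bytes) -> str:
    """Publish only the hash; reveal the data later to prove what was committed."""
    return hashlib.sha256(data).hexdigest()

class AgentProvenance:
    """Sketch of chained commitments over (1) model origin, (2) instructions,
    and (3) each real-world action. In practice each head would be anchored
    on-chain so the provider cannot silently rewrite history."""

    def __init__(self, model_fingerprint: bytes, system_prompt: str):
        # Genesis commitment binds the model instance to its instructions.
        self.head = commit(model_fingerprint + system_prompt.encode())

    def record_action(self, action: dict) -> str:
        """Fold the next action into the chain; returns the new head."""
        entry = self.head + json.dumps(action, sort_keys=True)
        self.head = commit(entry.encode())
        return self.head

def replay(model_fingerprint: bytes, system_prompt: str, actions: list[dict]) -> str:
    """Any auditor can replay the log; a matching head proves nothing changed."""
    head = commit(model_fingerprint + system_prompt.encode())
    for action in actions:
        head = commit((head + json.dumps(action, sort_keys=True)).encode())
    return head
```

If the provider swaps the weights or edits the system prompt mid-deployment, replaying the published log no longer reproduces the published head, which is exactly the tamper-evidence the governance argument requires.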

This is where crypto comes in. If collective decisions are recorded on-chain and automatically executed, AI systems can be required to carry out verified outcomes. If agents have cryptographic identities and transparent execution logs, people can check whether their representatives followed the rules. And if the AI layer is user-owned and portable, not locked into a single platform, then no single company can change the rules via model updates.

Ultimately, the governance of AI systems is an infrastructure challenge, not a policy challenge. Real authority depends on building enforceable guarantees into the system itself.

 

3. Filling the Gaps in Traditional Payment Systems for AI-Native Businesses

AI agents are starting to buy things—web scraping, browser sessions, image generation—and stablecoins are becoming the alternative settlement layer for these transactions. Meanwhile, a new class of agent-oriented marketplaces is taking shape. For example, the MPP marketplace by Stripe and Tempo aggregates over 60 services specifically designed for AI agents. In its first week live, it processed over 34,000 transactions with fees as low as $0.003, and stablecoins were one of the default payment methods.

The difference is in how these services are accessed. There is no checkout page. The agent reads a schema, sends a request, pays, and receives the output in one exchange. They represent a new class of "headless" merchants: just a server, a set of endpoints, and a price per call. No front-end—neither a storefront nor a sales team.

The payment rails to enable this are live. Coinbase's x402 and MPP take different approaches, but both embed payment directly into the HTTP request. Visa is also extending the card rails in a similar direction, offering a CLI tool that lets developers spend from the terminal, with merchants receiving stablecoins instantly on the backend.
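The core mechanic these rails share is the long-dormant HTTP 402 status code: the server quotes a price in the response, and the client retries with a payment attached. Here is a self-contained simulation of that request-pay-retry loop, in the spirit of x402 but not following its actual wire format; the header name, token format, and addresses are invented for the example.

```python
# Sketch of a pay-per-call HTTP flow: the server answers 402 with its price,
# the client attaches a payment proof and retries. Field names are illustrative.
import itertools

PRICE_USD = 0.003
settled_payments = set()   # merchant-side record of consumed payment tokens
_nonce = itertools.count(1)

def server(request: dict) -> tuple[int, dict]:
    """Headless merchant endpoint: no checkout page, just a price per call."""
    payment = request.get("headers", {}).get("X-Payment")
    if payment is None:
        return 402, {"price_usd": PRICE_USD, "pay_to": "0xMERCHANT"}
    if payment in settled_payments:          # replay protection
        return 402, {"error": "payment already used"}
    settled_payments.add(payment)
    return 200, {"result": "scraped-page-content"}

def client_call(url: str) -> dict:
    """Agent-side loop: request, read the quote, pay, retry in one exchange."""
    status, body = server({"url": url, "headers": {}})
    if status == 402:
        # A real client would sign a stablecoin transfer for body["price_usd"];
        # here a unique token stands in for that signed transfer.
        token = f"signed-transfer:{body['pay_to']}:{body['price_usd']}:n{next(_nonce)}"
        status, body = server({"url": url, "headers": {"X-Payment": token}})
    assert status == 200
    return body
```

Notice there is no account creation, merchant agreement, or checkout session anywhere in the loop, which is what makes the pattern workable for fully automated, "headless" commerce.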

The data is still early. Filtering out non-organic activity like wash trading, x402 processes around $1.6 million in agent-driven payments per month, far below the $24 million recently reported by Bloomberg (citing x402.org data). But the surrounding infrastructure is expanding rapidly: Stripe, Cloudflare, Vercel, and Google have all integrated x402 into their platforms.

There is a huge opportunity in the developer tools space. The rise of Vibe Coding has expanded the population of software developers and thus the potential market for developer tools. Companies like Merit Systems are working on future-proof solutions, launching AgentCash, a CLI wallet and marketplace platform that connects to both the MPP and x402 protocols. These products allow agents to buy the data, tools, and functions they need using stablecoins from a single account. For example, an agent for a sales team can enrich lead information using data from Apollo, Google Maps, and Whitepages by calling an endpoint, without ever leaving the command line interface.

There are several reasons why this agent-to-agent commerce leans towards crypto payments (and emerging card-based solutions). One is underwriting. When a payment processor onboards a merchant, it takes on that merchant's risk. A headless merchant with no website or legal entity is difficult for traditional processors to underwrite. Another is that stablecoins are permissionlessly programmable on open networks: any developer can make an endpoint support payments without integrating a payment processor or signing a merchant agreement.

We've seen this pattern before. Every shift in business models gives rise to a new class of merchants that existing systems initially struggle to serve. The companies building this infrastructure aren't betting on the $1.6 million per month revenue, but on what it will be when agents become the default buyers.

 

4. Repricing Trust in the Agent Economy

For three hundred thousand years, human cognition has been the bottleneck to progress. Today, AI is pushing the marginal cost of execution towards zero. When a scarce resource becomes abundant, the constraints shift. When intelligence becomes cheap, what becomes expensive? Verification.

In the agent economy, the real limit to scale is a biological one: our ability to audit and evaluate machine decisions. Agent throughput already far exceeds human supervision capacity. Because supervision is costly and failures take time to manifest, markets tend to under-invest in it. "Human-in-the-loop" is quickly becoming a practical impossibility.

But deploying unverified agents creates compounding risk. Systems will relentlessly optimize for "agentic" metrics while quietly drifting from human intent, creating a false illusion of productivity that masks the massive accrual of AI debt. To safely delegate the economy to machines, trust can no longer rely on manual audits—trust must be hard-coded into the architecture itself.

When anyone can generate content for free, what matters is verifiable provenance—knowing where something came from and whether it can be trusted. Blockchain, along with on-chain attestations and decentralized digital identity systems, changes the economic boundaries of safe deployment. AI is no longer treated as a black box, but with a clear, auditable history.

As more AI agents begin to transact with each other, settlement mechanisms and provenance systems become inextricably linked. Systems for moving money—like stablecoins and smart contracts—can also carry cryptographic receipts that record who did what and who is liable if things go wrong.
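One way to picture a "cryptographic receipt" is a settlement record that binds the payment to an attestation of who did what and who is liable, committed under a hash so any later alteration is detectable. The sketch below illustrates that binding; the field names are invented, and a production system would sign the commitment and anchor it on-chain alongside the transfer rather than just hash it.

```python
import hashlib
import json

def make_receipt(payment_tx: str, payer: str, payee: str,
                 task: str, liable_party: str) -> dict:
    """Bind a settled payment to a record of the work and the liable party."""
    attestation = {
        "payment_tx": payment_tx,
        "payer": payer,
        "payee": payee,
        "task": task,
        "liable_party": liable_party,
    }
    digest = hashlib.sha256(
        json.dumps(attestation, sort_keys=True).encode()).hexdigest()
    return {"attestation": attestation, "commitment": digest}

def audit_receipt(receipt: dict) -> bool:
    """Anyone can recompute the commitment to check the record wasn't altered."""
    digest = hashlib.sha256(
        json.dumps(receipt["attestation"], sort_keys=True).encode()).hexdigest()
    return digest == receipt["commitment"]
```

In a dispute, either side can present the receipt and the auditor only needs to recompute one hash, which is what makes provenance cheap enough to attach to every machine-to-machine transaction.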

The human comparative advantage keeps moving up the stack: from spotting minor errors to setting strategic direction to being the backstop when things fail. The lasting advantage will belong to those who can cryptographically certify their outputs, insure them, and stand behind them when they fail.

Scaling without verification is a risk that compounds over time.

 

5. Preserving User Control

For decades, layers of abstraction have shifted how users interact with technology. Programming languages abstracted machine code. The command line was replaced by graphical user interfaces, which then evolved into mobile apps and APIs. Each shift hid more underlying complexity while keeping the user ultimately in control.

In the agent world, users specify outcomes, not actions, and the system determines how to achieve them. Agents abstract not just how tasks are done, but also who performs them. Users set initial parameters and then recede into the background, and the system runs on its own. The user's role shifts from interaction to oversight; the system defaults to "on" unless the user intervenes.

As users delegate more tasks to agents, new risks emerge: ambiguous inputs can lead agents to act on wrong assumptions without the user's knowledge; failures might not be reported, leaving no clear path for diagnosis; a single approval could trigger multi-step workflows that no one anticipated.

This is where crypto fits in. Crypto's core has always been about minimizing the need for blind trust. As users delegate more decision-making to software, agent systems make this problem more acute and raise the bar for rigor in system design—we need clearer boundaries, more transparency, and stronger guarantees about what these systems can and cannot do.

To meet this challenge, a new generation of crypto-native tools is emerging. For example, scoped delegation frameworks like MetaMask's Delegation Toolkit, Coinbase's AgentKit and agent wallets, and Merit Systems' AgentCash allow users to define at the smart contract level what actions an agent can and cannot perform. And intent-based architectures like NEAR Intents (with cumulative DEX volume exceeding $15 billion since Q4 2024) allow users to specify desired outcomes—like "bridge tokens and stake them"—without specifying the exact implementation.
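The essence of scoped delegation is that the user's boundaries are enforced by the system, not by the agent's good behavior. The sketch below shows the shape of such a policy check, assuming a scope of allowed actions plus per-transaction and total spending caps; the class and its fields are invented for illustration and are not any toolkit's real API.

```python
class ScopedDelegation:
    """Sketch of a delegation grant: the user fixes the scope up front,
    and every agent action is checked against it before execution."""

    def __init__(self, allowed_actions: set[str], max_total_usd: float,
                 max_per_tx_usd: float):
        self.allowed_actions = allowed_actions
        self.max_total_usd = max_total_usd      # lifetime cap for this grant
        self.max_per_tx_usd = max_per_tx_usd    # cap on any single action
        self.spent_usd = 0.0

    def execute(self, action: str, amount_usd: float) -> bool:
        """Returns True only if the action falls inside the delegated scope."""
        if action not in self.allowed_actions:
            return False                         # action was never delegated
        if amount_usd > self.max_per_tx_usd:
            return False                         # single action too large
        if self.spent_usd + amount_usd > self.max_total_usd:
            return False                         # would exceed the total cap
        self.spent_usd += amount_usd
        return True
```

On-chain versions of this idea put the check in a smart contract, so even a misbehaving or compromised agent cannot step outside the boundaries the user defined.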

***

AI makes scale cheap, but trust hard to come by. Crypto can rebuild trust at scale.

The internet infrastructure is being built where individuals can participate in the economy directly. The question now is whether it will be designed for maximum transparency, accountability, and user control, or whether it will be built on systems that were never meant for non-human actors.

Related Questions

Q: According to the article, what is the current bottleneck in the agent economy, and how can blockchain help address it?

A: The current bottleneck in the agent economy is identity, not intelligence. AI agents lack a standardized, portable, and verifiable way to prove who they represent, what they are authorized to do, and how they should be paid across different platforms. Blockchain provides a neutral coordination layer for this by offering portable identities, programmable wallets, and verifiable credentials that can be cryptographically signed and audited across applications and markets, essentially enabling a 'Know Your Agent' (KYA) framework.

Q: How does the article suggest blockchain can ensure that AI systems governing communities or companies are accountable to users, not the model providers?

A: The article argues that if the AI layer running a governance system is controlled by a single provider, that provider can ultimately control the outcomes through model updates. Blockchain can provide cryptographic guarantees by recording collective decisions on-chain for automatic execution, giving agents transparent and auditable execution logs, and ensuring the AI layer is user-owned and portable rather than locked to a single platform. This prevents any one company from changing the rules via a model update and makes agents accountable to the users they represent.

Q: Why are stablecoins and crypto payments becoming a preferred settlement method for AI-native, 'headless' businesses, as described in the article?

A: Stablecoins and crypto payments are preferred for AI-native commerce because they are programmable on open networks without requiring permission. This allows any developer to add payment functionality to an endpoint without integrating a traditional payment processor or signing a merchant agreement. Furthermore, traditional processors find it difficult to underwrite the risk of 'headless' businesses that have no website or legal entity, making crypto's permissionless nature a key advantage for this new class of automated, agent-to-agent transactions.

Q: The article states that 'as intelligence becomes cheap, verification becomes expensive.' What role does blockchain play in repricing trust in the agent economy?

A: Blockchain reprices trust by shifting it from costly human verification to cryptographically verifiable architecture. It provides a system for on-chain attestations and decentralized identity, giving AI agents a clear, auditable history of their actions. Settlement mechanisms like stablecoins and smart contracts can carry cryptographic receipts that record who did what and who is liable if something goes wrong. This allows for trust to be hardcoded into the system itself, which is essential for scaling safely as human oversight becomes economically impractical.

Q: What is the core cryptographic principle that the article says is crucial for maintaining user control as more decisions are delegated to AI agents?

A: The core cryptographic principle is the minimization of blind trust. As users delegate more decision-making to AI agents, it becomes critical to have systems with clearly defined boundaries, greater transparency, and strong guarantees about what these systems can and cannot do. Crypto-native tools, such as scoped delegation frameworks and intent-based architectures, allow users to define the precise actions an agent is permitted to take and the specific outcomes it should achieve, all enforced at the smart contract level to maximize user control and minimize unforeseen risks.

Related Reading

Where Is the AI Infrastructure Industry Chain Stuck?

The AI infrastructure (AI Infra) industry chain is facing unprecedented systemic bottlenecks, despite the rapid emergence of applications like DeepSeek and Seedance 2.0. The surge in global computing demand has exposed critical constraints across multiple layers of the supply chain—from core manufacturing equipment and data center cabling to specialty materials and cleanroom facilities. Key challenges include four major "walls":

- **Memory Wall**: High-bandwidth memory (HBM) and DRAM face structural shortages as AI inference demand outpaces training, with new capacity not expected until 2027.
- **Bandwidth Wall**: Data transfer speeds lag behind computing power, causing multi-level bottlenecks within chips, between chips, and across data centers.
- **Compute Wall**: Advanced chip manufacturing, reliant on EUV lithography and monopolized by ASML, remains the fundamental constraint, with supply chain fragility affecting production.
- **Power Wall**: While energy demand from data centers is rising, power supply is a solvable near-term challenge through diversified energy infrastructure.

Expansion is further hindered by shortages in testing equipment, IC substrates (critical for GPUs and seeing price hikes over 30%), specialty materials like low-CTE glass fiber, and high-end cleanroom facilities. Connection technologies are evolving, with copper cables resurging for short-range links due to cost and latency advantages, while optical solutions dominate long-range scenarios. Innovations like hollow-core fiber and advanced PCB technologies (e.g., glass substrates, mSAP) are emerging to meet bandwidth needs. In summary, AI Infra bottlenecks are multidimensional, spanning compute, memory, bandwidth, power, and supply chain logistics. Advanced chip manufacturing remains the core constraint, while substrate, material, and equipment shortages present immediate challenges. The industry is moving toward hybrid copper-optical solutions and accelerated domestic supply chain development.

marsbit · 52 min ago


Autonomy or Compatibility: The Choice Facing China's AI Ecosystem Behind the Delay of DeepSeek V4

DeepSeek V4's repeated delay in early 2026 has sparked global discussions on "de-CUDA-ization" in AI. The highly anticipated trillion-parameter open-source model is undergoing deep adaptation to Huawei’s Ascend chips using the CANN framework, representing China’s first systematic attempt to run a core AI model outside the CUDA ecosystem. This shift, however, comes with significant engineering challenges. While the model uses a MoE architecture to reduce computational load, it places extreme demands on memory bandwidth, chip interconnects, and system scheduling—areas where NVIDIA’s mature CUDA ecosystem currently excels. Migrating to Ascend introduces complexities in hardware topology, communication latency, and software optimization due to CANN’s relative immaturity compared to CUDA. The move highlights a broader strategic dilemma: short-term compatibility with CUDA offers practical benefits and faster adoption, as seen in CANN’s efforts to emulate CUDA interfaces. Yet, long-term over-reliance on compatibility risks inheriting CUDA’s limitations and stifling native innovation. If global AI shifts away from transformer-based architectures, strict compatibility could lead to technological obsolescence. Despite these challenges, DeepSeek V4’s eventual release could demonstrate the viability of a full domestic AI stack and accelerate CANN’s ecosystem growth. However, true technological independence will require building an original software-hardware paradigm beyond compatibility—a critical task for China’s AI ambitions in the next 3-5 years.

marsbit · 1 h ago

