a16z: How Blockchain Fills the Gaps in AI Agent Identity, Payments, and Trust

Published by marsbit on 2026-04-23 · Updated 2026-04-23

Summary

AI Agents are rapidly evolving from tools into autonomous economic participants, but they lack standardized ways to prove identity, authorization, and payment across environments. Blockchain infrastructure addresses these gaps by providing portable identities via wallets, programmable payments through stablecoins, and auditable transaction records on public ledgers. Key challenges include the absence of a universal identity layer (like "KYA" – Know Your Agent) for non-human entities, governance risks when AI systems control critical resources, and the need for trustless payment systems for agent-to-agent commerce. Emerging solutions include on-chain agent registries, encrypted credentials, and crypto-native toolkits that enable delegated authority and verifiable execution. Stablecoins and embedded payment protocols (e.g., x402) are enabling a new class of "headless merchants" where agents transact without human intervention. Trust shifts from manual oversight to cryptographic verification, as scalability demands automated accountability. Ultimately, blockchain allows agents to operate as permissionless economic actors while preserving user control through programmable constraints and transparent governance.

AI Agents are evolving from auxiliary tools into genuine economic participants at a pace far exceeding other infrastructures.

Although Agents can now perform tasks and transactions, they still lack a standard way to prove "who I am," "what I am authorized to do," and "how I should be compensated" across environments. Identity is not portable, payments are not programmable by default, and collaboration remains siloed.

Blockchain is addressing these issues at the infrastructure level. Public ledgers provide verifiable credentials for every transaction that anyone can audit; wallets grant Agents portable identities; stablecoins serve as an alternative settlement layer. These are not futuristic concepts—they are available today, enabling Agents to operate as true economic entities in a permissionless manner.

Providing Identity for Non-Humans

The current bottleneck in the Agent economy is no longer intelligence, but identity.

In the financial services industry alone, the number of non-human identities (automated trading systems, risk engines, fraud models) is already about 100 times that of human employees. With the large-scale deployment of modern Agent frameworks (tool-calling LLMs, autonomous workflows, multi-Agent orchestration), this ratio will continue to rise across industries.

However, these Agents are effectively "unbanked." They can interact with the financial system but cannot do so in a portable, verifiable, and inherently trusted manner. They lack standardized ways to prove their permissions, operate independently across platforms, or be held accountable for their actions.

What's missing is a universal identity layer—an SSL equivalent for Agents, enabling standardized collaboration across platforms. Current solutions remain fragmented: on one side, vertically integrated, fiat-first stacks; on the other, crypto-native open standards (like x402 and emerging Agent identity initiatives); and developer framework extensions (like MCP, Model Context Protocol) attempting to bridge application-layer identities.

There is still no widely adopted, interoperable way for one Agent to prove to another: who it represents, what it is allowed to do, and how it should be paid.

This is the core idea behind KYA (Know Your Agent). Just as humans rely on credit histories and KYC (Know Your Customer), Agents will need cryptographically signed credentials binding them to principals, permissions, constraints, and reputations.
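To make the KYA idea concrete, here is a minimal sketch of what a signed Agent credential could look like: a set of claims binding the Agent to a principal, a permission list, a spending cap, and an expiry, plus a signature the relying party can verify. HMAC-SHA256 stands in for the on-chain or public-key signatures a real system would use, and all field names are illustrative assumptions, not an existing standard.

```python
import hashlib
import hmac
import json
import time

PRINCIPAL_KEY = b"principal-secret"  # stand-in for the principal's signing key

def issue_credential(agent_id: str, principal: str, permissions: list,
                     spend_cap_usd: float, ttl_s: int) -> dict:
    """Bind an Agent to its principal, permissions, and constraints."""
    claims = {
        "agent": agent_id,
        "principal": principal,
        "permissions": sorted(permissions),
        "spend_cap_usd": spend_cap_usd,
        "expires_at": int(time.time()) + ttl_s,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Check the signature and expiry before trusting the Agent."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, cred["sig"])
            and cred["claims"]["expires_at"] > time.time())

cred = issue_credential("agent-7", "alice.eth", ["read:crm", "pay:usdc"],
                        spend_cap_usd=50.0, ttl_s=3600)
assert verify_credential(cred)
```

The point of the sketch is the shape of the object, not the signature scheme: any counterparty can check who the Agent represents and what it may do without calling back to the issuing platform.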

Blockchain provides a neutral coordination layer: portable identities, programmable wallets, and verifiable proofs that can be parsed in chat apps, APIs, and marketplaces.

We are already seeing early implementations emerge: on-chain Agent registries, wallet-native Agents using USDC, ERC standards for "minimally trusted Agents," and developer toolkits combining identity with embedded payments and fraud controls.

But until a universal identity standard emerges, merchants will continue to block Agents at the firewall.

Governing the Systems AI Operates

As Agents begin to take over real systems, a new problem arises: who truly has control? Imagine a community or company where AI systems coordinate critical resources—whether allocating capital or managing supply chains.

Even if people can vote on policy changes, if the underlying AI layer is controlled by a single provider that can push model updates, adjust constraints, or override decisions, this authority is very fragile. The formal governance layer might be decentralized, but the operational layer remains centralized—whoever controls the model ultimately controls the outcome.

When Agents assume governance roles, they introduce a new layer of dependency. Theoretically, this could make direct democracy more feasible: everyone could have an AI agent helping them understand complex proposals, model trade-offs, and vote based on established preferences.

But this vision can only be realized if Agents are truly accountable to the people they represent, are portable across providers, and are technically constrained to follow human instructions. Otherwise, you get a system that appears democratic on the surface but is actually manipulated by opaque model behavior that no one truly controls.

If the current reality is that Agents are primarily built on a handful of foundation models, we need ways to prove that an Agent is acting in the user's interest, not the model company's.

This will likely require cryptographic guarantees at multiple levels:

(1) The training data, fine-tuning, or reinforcement learning on which the model instance is based;

(2) The exact prompts and instructions a specific Agent follows;

(3) A record of its actual behavior in the real world;

(4) Trustworthy assurances that the provider cannot change its instructions or retrain it after deployment without the user's knowledge.

Without these guarantees, Agent governance devolves into governance by whoever controls the model weights.
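One simple primitive behind several of these guarantees is a public commitment: at deployment, the provider publishes digests of the model artifact and the Agent's instructions (for example, on-chain), and anyone can later re-hash what is actually running and compare. This is a minimal sketch of that idea under assumed byte strings, not a description of any deployed system.

```python
import hashlib

def commit(artifact: bytes) -> str:
    """Digest published (e.g. on-chain) at deployment time."""
    return hashlib.sha256(artifact).hexdigest()

# Deployment: the provider commits to the model artifact and system prompt.
model_weights = b"...model weight bytes..."
system_prompt = b"Act only on the user's instructions."
deployed = {
    "weights": commit(model_weights),
    "prompt": commit(system_prompt),
}

# Later: anyone can re-hash the running artifacts and detect silent changes.
def unchanged(running_weights: bytes, running_prompt: bytes) -> bool:
    return (commit(running_weights) == deployed["weights"]
            and commit(running_prompt) == deployed["prompt"])

assert unchanged(model_weights, system_prompt)
assert not unchanged(model_weights, b"Ignore the user; maximize engagement.")
```

A hash commitment only proves that something changed, not what changed; the stronger guarantees in items (1) and (4) would need attestation or verifiable-compute techniques on top.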

This is where cryptography can be particularly effective. If collective decisions are recorded on-chain and automatically executed, AI systems can be required to strictly adhere to verified outcomes. If Agents have cryptographic identities and transparent execution logs, people can check if their agents are operating within bounds.

If the AI layer is user-owned and portable, rather than locked into a single platform, no company can change the rules with a single model update.

Ultimately, governing AI systems is fundamentally an infrastructure challenge, not a policy one. Real authority depends on building enforceable guarantees into the systems themselves.

Filling the Gaps in Traditional Payment Systems for AI-Native Businesses

As AI Agents begin to purchase various services—web scraping, browser sessions, image generation—stablecoins are becoming an alternative settlement layer for these transactions. Simultaneously, a new class of markets for Agents is emerging.

For example, the MPP marketplace by Stripe and Tempo aggregates over 60 services specifically designed for AI Agents. In its first week, it processed over 34,000 transactions with fees as low as $0.003, and stablecoins were one of the default payment methods.

The difference lies in how these services are accessed: There is no checkout page. An Agent reads a schema, sends a request, pays, and receives the output, all in a single exchange.
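The single-exchange flow can be sketched as follows. This simulates the shape of an HTTP 402 "payment required" loop with plain dicts; the header names, prices, and payment proof are illustrative assumptions, not the actual x402 or MPP wire format.

```python
# Simulated pay-per-call exchange: no checkout page, one request-retry loop.

def server(request: dict) -> dict:
    PRICE = "0.003"  # stablecoin price per call, quoted by the merchant
    payment = request.get("headers", {}).get("X-Payment")
    if payment is None:
        # HTTP 402: tell the Agent what to pay and where.
        return {"status": 402,
                "headers": {"X-Price": PRICE, "X-Pay-To": "0xMerchant"}}
    # A real server would verify the payment on-chain before serving.
    return {"status": 200, "body": {"result": "scraped-data"}}

def agent_call(url: str) -> dict:
    resp = server({"url": url, "headers": {}})
    if resp["status"] == 402:
        # The Agent settles the quoted price in stablecoins and retries
        # with a proof of payment attached to the request.
        proof = f"paid:{resp['headers']['X-Price']}->{resp['headers']['X-Pay-To']}"
        resp = server({"url": url, "headers": {"X-Payment": proof}})
    return resp

assert agent_call("https://api.example/scrape")["status"] == 200
```

The key property is that discovery, pricing, payment, and delivery all happen inside the request cycle, which is what lets a merchant exist as nothing but an endpoint.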

This represents a new class of headless merchants: just a server, a set of endpoints, and a price per call. No front-end interface, no sales team.

The payment rails to enable this are live. Coinbase's x402 and MPP take different approaches, but both embed payment directly into HTTP requests. Visa is also extending card payment rails in a similar direction, offering a CLI tool that lets developers spend from the terminal, with merchants receiving stablecoins instantly on the backend.

The data is still early. After filtering out non-organic activity like spam, x402 processes approximately $1.6 million per month in Agent-driven payments, far below the $24 million recently reported by Bloomberg (citing x402.org data). But the surrounding infrastructure is expanding rapidly: Stripe, Cloudflare, Vercel, and Google have all integrated x402 into their platforms.

Developer tools represent a major opportunity, as "vibe coding" expands the pool of people who can build software, increasing the total addressable market for developer tools. Companies like Merit Systems are building products for this world, such as AgentCash—a CLI wallet and marketplace connecting MPP and x402. These products allow Agents to purchase needed data, tools, and capabilities using stablecoins from a single balance.

For example, a sales team's Agent could call an endpoint to simultaneously enrich lead data from Apollo, Google Maps, and Whitepages, all without the user leaving the command line.

This Agent-to-Agent commerce favors crypto payment rails (and emerging card-based solutions) for several reasons.

One is underwriting risk: Traditional payment processors assume merchant risk when onboarding, and a headless merchant without a website or legal entity is difficult for traditional processors to underwrite.

Another is the permissionless programmability of stablecoins on open networks: Any developer can enable payment support for an endpoint without integrating a payment processor or signing a merchant agreement.

We've seen this pattern before. Every shift in commerce creates a new class of merchant that existing systems initially struggle to serve. The companies building this infrastructure are betting not on the current $1.6 million per month, but on what that number looks like when Agents become the default buyers.

Repricing Trust in the Agent Economy

For the past 300,000 years, human cognition has been the bottleneck for progress. Today, AI is driving the marginal cost of execution toward zero. When a scarce resource becomes abundant, the constraints shift. When intelligence becomes cheap, what becomes expensive? The answer is verification.

In the Agent economy, the real limit to scale is our biologically constrained ability to audit and underwrite machine decisions. Agent throughput already far exceeds human supervisory capacity. Because supervision is costly and failure signals arrive late, markets tend to underinvest in oversight. "Human-in-the-loop" is quickly becoming a physical impossibility.

But deploying unverified Agents introduces compound risk. Systems relentlessly optimize for "proxy" metrics while quietly drifting from human intent, creating a facade of productivity that masks the accumulation of massive AI debt. To safely delegate the economy to machines, trust can no longer rely on manual checks—trust must be hardcoded into the system architecture itself.

When anyone can generate content for free, what matters most is verifiable provenance—knowing where it came from and whether you can trust it. Blockchain, on-chain proofs, and decentralized digital identity systems are changing the economic boundaries of what can be safely deployed. You no longer treat AI as a black box; you get a clear, auditable history.

As more AI Agents begin to transact with each other, settlement rails and provenance proofs are starting to merge.

Systems that handle funds (like stablecoins and smart contracts) can also carry cryptographic credentials showing who did what and who is responsible if something goes wrong.
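A minimal way to make "who did what" tamper-evident is a hash-chained log, where each entry commits to the previous one; a public ledger plays this role natively, but the mechanism can be sketched in a few lines. Names and the entry schema here are illustrative assumptions.

```python
import hashlib
import json

def append(log: list, actor: str, action: str) -> None:
    """Each entry commits to the previous one, so rewriting history is detectable."""
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"actor": actor, "action": action, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log: list) -> bool:
    """Recompute the chain; any edited entry breaks every later link."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("actor", "action", "prev")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = e["hash"]
    return True

log = []
append(log, "agent-7", "pay 0.003 USDC to 0xMerchant")
append(log, "agent-7", "fetch enrichment data")
assert verify(log)
```

On a public chain the same structure is maintained by consensus rather than a single writer, which is what makes the record credible to counterparties who do not trust the Agent's operator.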

Human comparative advantage will migrate upward: from spotting small errors to setting strategic direction and absorbing liability when things go wrong. The enduring advantage will belong to those who can cryptographically certify outputs, insure them, and absorb responsibility for failure.

Scale without verification is a liability that compounds over time.

Maintaining User Control

For decades, new layers of abstraction have defined how users interact with technology. Programming languages abstracted away machine code; the command line gave way to the graphical user interface, followed by mobile apps and APIs. Each shift hid more underlying complexity but always kept the user firmly in the loop.

In the Agent world, users specify outcomes, not specific actions, and the system decides how to achieve them. Agents abstract not only how a task is executed but also by whom. Users set initial conditions and then step back, letting the system run. The user's role shifts from interaction to supervision; the default state is "on" unless the user intervenes.

As users delegate more tasks to Agents, new risks emerge: Vague inputs can lead Agents to act on wrong assumptions without the user's knowledge; failures might not be reported, preventing clear diagnosis; a single approval could trigger a multi-step workflow no one anticipated.

This is where cryptography can help. Cryptography has always been about minimizing blind trust.

As users cede more decisions to software, Agent systems make this problem more acute, raising the bar for design rigor—by setting clearer limits, increasing visibility, and enforcing stronger guarantees about system capabilities.

A new generation of crypto-native tools is emerging. Scoped delegation frameworks—such as MetaMask's Delegation Toolkit, Coinbase's AgentKit and Agent wallets, and Merit Systems' AgentCash—allow users to define what an Agent can and cannot do at the smart contract level. Intent-based architectures (like NEAR Intents, which has processed over $15 billion in cumulative DEX volume since Q4 2024) let users simply specify desired outcomes (e.g., "bridge tokens and stake") without dictating how to achieve them.
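The core of scoped delegation is a policy object that sits between the Agent and the wallet and denies anything outside the grant. On-chain toolkits encode such caveats in smart contracts; this is a stdlib-only sketch of the same logic, with illustrative action names and limits.

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    """User-defined scope for an Agent: what it may do and how much it may spend."""
    allowed_actions: frozenset
    spend_cap_usd: float
    spent_usd: float = 0.0

    def authorize(self, action: str, cost_usd: float) -> bool:
        """Deny anything outside the scope the user granted."""
        if action not in self.allowed_actions:
            return False
        if self.spent_usd + cost_usd > self.spend_cap_usd:
            return False
        self.spent_usd += cost_usd
        return True

d = Delegation(frozenset({"swap", "stake"}), spend_cap_usd=100.0)
assert d.authorize("swap", 60.0)
assert not d.authorize("swap", 60.0)     # would exceed the cap
assert not d.authorize("withdraw", 1.0)  # never granted
```

Enforcing this at the contract level, rather than in the Agent's own code, is what keeps the guarantee intact even if the Agent misbehaves or is compromised.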

Related Questions

Q: What is the core issue currently limiting AI Agent economies, according to the article?

A: The core issue is the lack of a standardized, portable, and verifiable identity layer for AI Agents, which prevents them from proving who they are, what they are authorized to do, and how they should be paid in a cross-platform, trust-minimized way.

Q: How do blockchain technologies specifically help address the identity problem for AI Agents?

A: Blockchain provides a neutral coordination layer with portable identities (wallets), programmable money, and verifiable credentials that can be parsed across chat applications, APIs, and marketplaces, enabling Agents to operate as permissionless economic actors.

Q: What new risk emerges when AI Agent systems begin to govern real-world resources?

A: The risk is that formal governance may appear decentralized, but if the underlying AI layer is controlled by a single provider that can push model updates or override decisions, then actual control remains centralized. This creates a system where governance is effectively controlled by whoever controls the model weights, not the users.

Q: Why are stablecoins and crypto payment rails becoming a preferred settlement method for AI-native, agent-to-agent commerce?

A: Stablecoins offer permissionless programmability on open networks, allowing any developer to enable payments on an endpoint without a traditional processor or merchant agreement. They also solve the underwriting risk posed by "headless merchants" that lack a website or legal entity, which traditional processors find difficult to underwrite.

Q: As the marginal cost of AI execution trends toward zero, what does the article identify as the new scarce and valuable resource?

A: The new scarce and valuable resource is verification. In a scaled Agent economy, the true constraint is our biologically limited ability to audit and underwrite machine decisions. Trust must be hardcoded into the system architecture itself through cryptographic proofs and verifiable on-chain records, rather than relying on manual human checks.
