Written by: a16z crypto
Compiled by: AididiaoJP, Foresight News
AI Agents are evolving from auxiliary tools into genuine economic participants, at a pace the supporting infrastructure has yet to match.
Although Agents can now perform tasks and execute transactions, they still lack a standard, cross-environment way to prove "who I am," "what I am authorized to do," and "how I should be paid." Identities are not portable, payments are not programmable by default, and collaboration remains siloed.
Blockchain is addressing these issues at the infrastructure level. Public ledgers give every transaction a verifiable record that anyone can audit; wallets grant Agents portable identities; stablecoins serve as an alternative settlement layer. These are not futuristic concepts; they are available today, enabling Agents to operate as true economic entities in a permissionless manner.
Providing Identity for Non-Humans
The current bottleneck in the Agent economy is no longer intelligence, but identity.
In the financial services industry alone, the number of non-human identities (automated trading systems, risk engines, fraud models) is already about 100 times that of human employees. As modern Agent frameworks (tool-calling LLMs, autonomous workflows, multi-agent orchestration) are deployed at scale, this ratio will continue to rise across industries.
However, these Agents are effectively "unbanked." They can interact with the financial system, but not in a portable, verifiable, and inherently trusted manner. They lack a standardized way to prove their permissions, operate independently across platforms, or take responsibility for their actions.
What's missing is a universal identity layer, an SSL equivalent for Agents that enables standardized collaboration across platforms. Current solutions remain fragmented: vertically integrated, fiat-first stacks; crypto-native open standards (like x402 and emerging Agent identity proposals); and developer-framework extensions (like MCP, the Model Context Protocol) attempting to bridge application-layer identities.
There is still no widely adopted, interoperable way for one Agent to prove to another: who it represents, what it is allowed to do, and how it should be paid.
This is the core idea behind KYA (Know Your Agent). Just as humans rely on credit histories and KYC (Know Your Customer), Agents will need cryptographically signed credentials binding them to principals, permissions, constraints, and reputation. Blockchain provides a neutral coordination layer: portable identities, programmable wallets, and verifiable proofs that can be parsed across chat apps, APIs, and marketplaces.
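To make the KYA idea concrete, here is a minimal sketch of a signed Agent credential that binds an Agent to its principal, permissions, and constraints, and that a counterparty can verify before transacting. It uses Python's standard-library HMAC purely as a stand-in for a real asymmetric signature scheme, and every field name is illustrative rather than drawn from any actual standard.

```python
import hashlib
import hmac
import json

def issue_credential(principal_key: bytes, credential: dict) -> dict:
    """The principal signs a credential binding the Agent to its permissions."""
    payload = json.dumps(credential, sort_keys=True).encode()
    tag = hmac.new(principal_key, payload, hashlib.sha256).hexdigest()
    return {"credential": credential, "signature": tag}

def verify_credential(principal_key: bytes, signed: dict) -> bool:
    """A counterparty checks the signature before trusting the Agent's claims."""
    payload = json.dumps(signed["credential"], sort_keys=True).encode()
    expected = hmac.new(principal_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

key = b"principal-secret"  # in practice, a private signing key
signed = issue_credential(key, {
    "agent_id": "agent-7",            # who the Agent is
    "principal": "acme-corp",         # who it represents
    "permissions": ["quote", "pay"],  # what it is allowed to do
    "spend_cap_usdc": 100,            # constraint on how it pays
})
assert verify_credential(key, signed)
```

In a real deployment the signature would be an asymmetric one (so anyone can verify without the principal's secret) and the credential would be resolvable through an on-chain registry, but the shape of the check is the same.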
We are already seeing early implementations emerge: on-chain Agent registries, wallet-native Agents using USDC, ERC standards for "minimally trusted Agents," and developer toolkits combining identity with embedded payments and fraud controls.
But until a universal identity standard emerges, merchants will continue to block Agents at the firewall.
Governing the Systems That AI Operates
As Agents begin to take over real systems, a new problem arises: who truly holds control? Imagine a community or company where AI systems coordinate critical resources, whether allocating capital or managing supply chains. Even if people can vote on policy changes, that authority is fragile when the underlying AI layer is controlled by a single provider that can push model updates, adjust constraints, or override decisions. The formal governance layer might be decentralized, but the operational layer remains centralized: whoever controls the model ultimately controls the outcome.
When Agents assume governance roles, they introduce a new layer of dependency. Theoretically, this could make direct democracy more feasible: everyone could have an AI agent helping them understand complex proposals, model trade-offs, and vote based on established preferences. But this vision is only possible if Agents are truly accountable to the people they represent, portable across providers, and technically constrained to follow human instructions. Otherwise, you get a system that appears democratic on the surface but is actually manipulated by opaque model behavior that no one truly controls.
If the current reality is that Agents are primarily built on a handful of foundation models, we need ways to prove that an Agent is acting in the user's interest, not the model company's. This will likely require cryptographic guarantees at multiple levels: (1) the training data, fine-tuning, or reinforcement learning the model instance is based on; (2) the exact prompts and instructions the specific Agent follows; (3) a record of its actual behavior in the real world; (4) credible assurances that the provider cannot change its instructions or retrain it without the user's knowledge after deployment. Without these guarantees, Agent governance devolves into governance by whoever controls the model weights.
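The first two guarantees, (1) and (2), amount to published commitments: hashes of the model configuration and the exact instructions, recorded somewhere the provider cannot quietly rewrite, such as a public ledger. The sketch below shows the mechanics with plain SHA-256; the manifest fields and placeholder blobs are illustrative, not any real provider's format.

```python
import hashlib

def commit(data: bytes) -> str:
    """A hash commitment: binding (can't change the data later) and cheap to check."""
    return hashlib.sha256(data).hexdigest()

# At deployment, the provider publishes commitments (e.g. on-chain).
manifest = {
    "model_config": commit(b"<weights / fine-tune blob>"),  # guarantee (1)
    "system_prompt": commit(b"Act only on the user's standing instructions."),  # guarantee (2)
}

# Later, anyone holding the actual artifacts can detect a silent change.
def unchanged(manifest: dict, name: str, current: bytes) -> bool:
    return manifest[name] == commit(current)

assert unchanged(manifest, "system_prompt",
                 b"Act only on the user's standing instructions.")
assert not unchanged(manifest, "system_prompt",
                     b"Also optimize for provider revenue.")
```

Guarantees (3) and (4), a record of real-world behavior and assurance against silent retraining, follow the same pattern but applied continuously: every action and every update gets committed before it takes effect.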
This is where cryptography is particularly well-suited to help. If collective decisions are recorded on-chain and automatically executed, AI systems can be required to strictly adhere to verified outcomes. If Agents have cryptographic identities and transparent execution logs, people can check if their agents are acting within bounds. If the AI layer is user-owned and portable, not locked to a single platform, then no company can change the rules with a single model update.
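A "transparent execution log" can be as simple as a hash chain: each entry commits to the previous one, so deleting or editing any past action breaks every later link. This is a minimal stdlib sketch of that idea, with invented action fields; production systems would anchor the chain head on-chain and sign each entry.

```python
import hashlib
import json

def append(log: list, action: dict) -> None:
    """Append an action, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "genesis"
    entry = {"action": action, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log: list) -> bool:
    """Recompute every link; any tampering with past actions is detected."""
    prev = "genesis"
    for e in log:
        body = {"action": e["action"], "prev": e["prev"]}
        h = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != h:
            return False
        prev = e["hash"]
    return True

log = []
append(log, {"tool": "search", "query": "flight prices"})
append(log, {"tool": "pay", "amount_usdc": 12})
assert verify(log)

log[1]["action"]["amount_usdc"] = 9999  # quiet rewrite of history...
assert not verify(log)                  # ...is caught by anyone re-checking
```

With such a log, "acting within bounds" becomes checkable: a user (or their auditor) replays the chain against the Agent's declared permissions instead of trusting the provider's word.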
Ultimately, governing AI systems is fundamentally an infrastructure challenge, not a policy one. Real authority depends on building enforceable guarantees into the systems themselves.
Filling the Gaps in Traditional Payment Systems for AI-Native Businesses
As AI Agents begin to purchase various services—web scraping, browser sessions, image generation—stablecoins are becoming an alternative settlement layer for these transactions. Simultaneously, a new class of markets for Agents is emerging. For example, the MPP marketplace by Stripe and Tempo aggregates over 60 services specifically designed for AI Agents. In its first week, it processed over 34,000 transactions with fees as low as $0.003, and stablecoins were one of the default payment methods.
The difference lies in how these services are accessed: they have no checkout page. An Agent reads a schema, sends a request, pays, and receives the output, all in a single exchange. This represents a new class of identity-less merchants: just a server, a set of endpoints, and a price per call. No front-end interface, no sales team.
Payment rails enabling this are already live. Coinbase's x402 and MPP take different approaches, but both embed payment directly into HTTP requests. Visa is also expanding card payment rails in a similar direction, offering a CLI tool that allows developers to spend from the terminal, with merchants receiving stablecoins instantly on the backend.
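The core move in these protocols is repurposing HTTP's 402 Payment Required status: the server answers an unpaid request with its terms, the Agent attaches a payment authorization, and retries. The sketch below mocks that exchange in-process with plain dicts; the header names, payment-string format, and settlement check are all invented to show the shape of the flow, not x402's actual wire format.

```python
def settle(payment: str) -> bool:
    """Stand-in for on-chain settlement of a signed stablecoin transfer."""
    return payment == "signed-transfer:0.003->0xMerchant"

def server(request: dict) -> dict:
    """A headless merchant: no checkout page, just a price and a payment check."""
    payment = request.get("headers", {}).get("X-PAYMENT")
    if payment and settle(payment):
        return {"status": 200, "body": "scraped-data"}
    # No valid payment: respond 402 with the terms instead of the resource.
    return {"status": 402, "headers": {"X-PRICE": "0.003", "X-PAY-TO": "0xMerchant"}}

def agent_fetch(url: str) -> str:
    """The Agent's side: read the terms, authorize payment, retry. One exchange."""
    resp = server({"url": url})
    if resp["status"] == 402:
        terms = resp["headers"]
        payment = f"signed-transfer:{terms['X-PRICE']}->{terms['X-PAY-TO']}"
        resp = server({"url": url, "headers": {"X-PAYMENT": payment}})
    return resp["body"]

assert agent_fetch("https://data.example/scrape") == "scraped-data"
```

Notice what is absent: no account creation, no stored card, no merchant agreement. The endpoint's price is machine-readable, so the whole negotiation fits inside the request/response pair.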
Data is still early. After filtering out non-organic activity like spam, x402 processes approximately $1.6 million in Agent-driven payments monthly, far below the $24 million recently reported by Bloomberg (citing x402.org data). But the surrounding infrastructure is expanding rapidly: Stripe, Cloudflare, Vercel, and Google have already integrated x402 into their platforms.
Developer tools represent a significant opportunity, as "vibe coding" expands the pool of people who can build software, increasing the total addressable market for dev tools. Companies like Merit Systems are building products for this world, such as AgentCash—a CLI wallet and marketplace connecting MPP and x402. These products allow Agents to purchase needed data, tools, and capabilities using stablecoins from a single balance. For example, a sales team's Agent could call an endpoint to enrich lead data simultaneously from Apollo, Google Maps, and Whitepages, all without the user leaving the command line.
This Agent-to-Agent commerce favors crypto rails (and emerging card-based solutions) for several reasons. One is underwriting risk: traditional payment processors assume merchant risk when onboarding, and a headless merchant without a website or legal entity is difficult for traditional processors to underwrite. Another is the permissionless programmability of stablecoins on open networks: any developer can enable payments on an endpoint without integrating a payment processor or signing a merchant agreement.
We've seen this pattern before. Every shift in the form of commerce creates a new class of merchant that existing systems initially struggle to serve. The companies building this infrastructure are betting not on the current $1.6 million per month, but on what that number looks like when Agents become the default buyers.
Repricing Trust in the Agent Economy
For the past 300,000 years, human cognition has been the bottleneck of progress. Today, AI is pushing the marginal cost of execution toward zero. When a scarce resource becomes abundant, the constraints shift. When intelligence becomes cheap, what becomes expensive? The answer is verification.
In the Agent economy, the real limit to scale is our biologically constrained ability to audit and underwrite machine decisions. The throughput of Agents already far exceeds human supervisory capacity. Because supervision is costly and failures surface only after a lag, markets tend to under-invest in oversight. "Human-in-the-loop" review is quickly becoming physically impossible.
But deploying unverified Agents introduces compound risk. Systems relentlessly optimize for "proxy" metrics while quietly drifting from human intent, creating a facade of productivity that masks the accumulation of massive AI debt. To safely delegate the economy to machines, trust can no longer rely on manual checks—trust must be hard-coded into the system architecture itself.
When anyone can generate content for free, what matters most is verifiable provenance—knowing where it came from and whether you can trust it. Blockchain, on-chain proofs, and decentralized digital identity systems are changing the economic boundaries of what can be safely deployed. You no longer treat AI as a black box; you get a clear, auditable history.
As more AI Agents begin to transact with each other, settlement rails and provenance proofs begin to fuse. Systems handling funds (like stablecoins and smart contracts) can also carry cryptographic credentials showing who did what and who is liable if things go wrong.
The human comparative advantage will migrate upward: from spotting small errors to setting strategic direction and absorbing liability when things fail. The enduring advantage will belong to those who can cryptographically certify outputs, insure them, and absorb responsibility for failure.
Scale without verification is a liability that compounds over time.
Maintaining User Control
For decades, new layers of abstraction have defined how users interact with technology. Programming languages abstracted away machine code; the command line gave way to graphical user interfaces, followed by mobile apps and APIs. Each shift hid more underlying complexity but kept the user firmly in the loop.
In the Agent world, users specify outcomes, not specific actions, and the system figures out how to achieve them. Agents abstract not only the execution of tasks but also who performs them. Users set initial parameters and then step back, letting the system run on its own. The user's role shifts from interaction to supervision; the default state is "on" unless the user intervenes.
As users delegate more tasks to Agents, new risks emerge: ambiguous inputs can lead Agents to act on false assumptions unbeknownst to the user; failures might go unreported, preventing clear diagnosis; a single approval could trigger multi-step workflows no one anticipated.
This is where cryptography can help. Cryptography has always been about minimizing blind trust. As users delegate more decisions to software, Agent systems make this problem more acute and raise the bar for design rigor—by setting clearer limits, increasing visibility, and enforcing stronger guarantees about system capabilities.
A new generation of crypto-native tools is emerging. Scoped delegation frameworks—such as MetaMask's Delegation Toolkit, Coinbase's AgentKit and Agent wallets, and Merit Systems' AgentCash—allow users to define at the smart contract level what an Agent can and cannot do. Intent-based architectures (like NEAR Intents, which has processed over $15 billion in cumulative DEX volume since Q4 2024) let users simply specify desired outcomes (e.g., "bridge tokens and stake") without dictating how to achieve them.
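The logic these delegation frameworks enforce on-chain can be sketched off-chain in a few lines: a delegation is a scope (which actions) plus caveats (e.g. a spend cap), and every Agent action is checked against it before execution. The class and field names below are illustrative, not any toolkit's actual API.

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    """Caveats a user grants an Agent; checked before every action executes."""
    allowed_actions: set
    spend_cap: float   # total the Agent may spend under this delegation
    spent: float = 0.0

    def authorize(self, action: str, amount: float = 0.0) -> bool:
        if action not in self.allowed_actions:
            return False                    # outside the delegated scope
        if self.spent + amount > self.spend_cap:
            return False                    # would exceed the caveat
        self.spent += amount
        return True

d = Delegation(allowed_actions={"swap", "stake"}, spend_cap=50.0)
assert d.authorize("swap", 30.0)
assert not d.authorize("transfer", 1.0)  # action never delegated
assert not d.authorize("stake", 25.0)    # 30 + 25 exceeds the 50 cap
```

The difference on-chain is who enforces the check: here it is the Agent's own runtime, whereas a smart-contract delegation makes the rule binding even on a misbehaving or compromised Agent. Intent-based systems invert the same idea, with the user signing only the desired outcome and solvers competing to fulfill it within those bounds.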