a16z: Scaling AI Without Cryptographic Verification Is a Dangerous Liability

marsbit · Published 2026-04-23 · Updated 2026-04-23

Introduction

A16z argues that scaling AI without cryptographic verification is a dangerous liability. As AI agents rapidly evolve from tools into autonomous economic participants, they currently lack standardized, portable identities, verifiable permissions, and programmable payment methods. This creates systemic risks in an economy where non-human entities already vastly outnumber human users in sectors like finance. Blockchain infrastructure offers a solution by providing a neutral coordination layer. It enables verifiable, on-chain credentials for agent identity (a "Know Your Agent" standard), ensures transparent governance to prevent centralized control of AI systems, and facilitates native payments through stablecoins and emerging markets for AI-to-AI commerce. Without cryptographic guarantees—such as auditable transaction records, constrained agent behavior, and proof of origin—scaling AI agents accumulates unmanaged risk. Trust deficit, not intelligence, becomes the bottleneck. The authors conclude that cryptographic verification is essential to maintain user control, ensure accountability, and safely delegate economic activity to autonomous systems.

Original Source: a16z crypto

Original Compilation: AididiaoJP, Foresight News

AI Agents are evolving from auxiliary tools into genuine economic participants at a pace the surrounding infrastructure has yet to match.

Although Agents can now perform tasks and transactions, they still lack a standard, cross-environment way to prove "who I am," "what I am authorized to do," and "how I should be paid." Identity is not portable, payments are not programmable by default, and collaboration remains siloed.

Blockchain is addressing these issues at the infrastructure level. Public ledgers provide verifiable, auditable credentials for every transaction; wallets grant Agents portable identity; stablecoins serve as an alternative settlement layer. These are not futuristic concepts—they are available today, enabling Agents to operate as true economic actors in a permissionless manner.

Providing Identity for Non-Humans

The current bottleneck in the Agent economy is no longer intelligence, but identity.

In the financial services industry alone, the number of non-human identities (automated trading systems, risk engines, fraud models) is already about 100 times that of human employees. As modern Agent frameworks (tool-calling LLMs, autonomous workflows, multi-agent orchestration) are deployed at scale, this ratio will continue to rise across industries.

Yet, these Agents remain effectively "unbanked." They can interact with the financial system, but not in a portable, verifiable, and inherently trusted manner. They lack a standardized way to prove their permissions, operate independently across platforms, or be held accountable for their actions.

What's missing is a universal identity layer—an SSL equivalent for Agents—that standardizes collaboration across platforms. Current solutions are fragmented: on one side, vertically integrated, fiat-first stacks; on another, crypto-native open standards (like x402 and emerging Agent identity proposals); and extensions to developer frameworks attempting to bridge application-layer identity (like MCP, Model Context Protocol).

There is still no widely adopted, interoperable way for one Agent to prove to another who it represents, what it is permitted to do, and how it should be paid.

This is the core idea behind KYA (Know Your Agent). Just as humans rely on credit histories and KYC (Know Your Customer), Agents will need cryptographically signed credentials binding them to a principal, permissions, constraints, and reputation.
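A KYA credential of the kind described here could, in principle, be a signed statement binding an agent to its principal, permissions, and constraints. The sketch below is purely illustrative: it uses an HMAC as a stand-in for the public-key signature a real on-chain credential would use, and all field names (`agent`, `permissions`, `constraints`) are assumptions, not part of any proposed standard.

```python
import hashlib
import hmac
import json

def issue_credential(principal_key: bytes, agent_id: str,
                     permissions: list, constraints: dict) -> dict:
    """Principal signs a statement binding the agent to its scope.
    (HMAC stands in for a real digital signature here.)"""
    body = {"agent": agent_id, "permissions": permissions,
            "constraints": constraints}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(principal_key, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_credential(principal_key: bytes, cred: dict) -> bool:
    """Any counterparty holding the principal's key material can check
    that the credential was not altered after issuance."""
    payload = json.dumps(cred["body"], sort_keys=True).encode()
    expected = hmac.new(principal_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

key = b"principal-secret"
cred = issue_credential(key, "agent-7",
                        ["read:crm", "pay:usdc<=100"],
                        {"expires": "2026-12-31"})
assert verify_credential(key, cred)
```

The point of the sketch is the binding: tampering with any field in the body invalidates the signature, which is what lets one agent trust another's claimed permissions without trusting the platform it arrived from.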

Blockchain provides a neutral coordination layer: portable identity, programmable wallets, and verifiable proofs that can be parsed across chat apps, APIs, and marketplaces.

We are already seeing early implementations emerge: on-chain Agent registries, wallet-native Agents using USDC, ERC standards for "minimally trusted Agents," and developer toolkits combining identity with embedded payments and fraud controls.

But until a universal identity standard emerges, merchants will continue to block Agents at the firewall.

Governing Systems Run by AI

As Agents begin to take over real systems, a new question arises: who truly holds control? Imagine a community or company where key resources (whether allocating funds or managing supply chains) are coordinated by AI systems.

Even if people can vote on policy changes, if the underlying AI layer is controlled by a single provider—able to push model updates, adjust constraints, or override decisions—this authority is very fragile. The formal governance layer might be decentralized, but the operational layer remains centralized—whoever controls the model ultimately controls the outcome.

When Agents take on governance roles, they introduce a new layer of dependency. In theory, this could make direct democracy more feasible: everyone could have an AI agent helping them understand complex proposals, model trade-offs, and vote based on established preferences.

But this vision only works if Agents are accountable to the people they represent, portable across providers, and technically constrained to follow human instructions. Otherwise, you get a system that appears democratic on the surface but is actually manipulated by opaque model behavior that no one truly controls.

If the current reality is that Agents are primarily built on a handful of foundation models, we need ways to prove that an Agent is acting in the user's interest, not the model company's.

This will likely require cryptographic guarantees at multiple levels:

(1) The training data, fine-tuning, or reinforcement learning the model instance is based on;

(2) The exact prompts and instructions the specific Agent follows;

(3) A record of its actual behavior in the real world;

(4) Trustworthy assurances that the provider cannot change its instructions or retrain it without the user's knowledge after deployment. Without these guarantees, Agent governance devolves into governance by whoever controls the model weights.
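One minimal mechanism for guarantees (2) and (3) above is a hash commitment to the agent's instructions, published before deployment, followed by a hash chain over its actions. This is a generic sketch of that idea, not any specific project's design; the prompt text and event fields are invented for illustration.

```python
import hashlib
import json

def commit(data: str) -> str:
    """Digest published (e.g. on-chain) before deployment; a later change
    to the prompt or training recipe would no longer match it."""
    return hashlib.sha256(data.encode()).hexdigest()

def append_log(prev_hash: str, event: dict) -> str:
    """Hash-chain each action so the behavior record is tamper-evident."""
    entry = json.dumps(event, sort_keys=True)
    return hashlib.sha256((prev_hash + entry).encode()).hexdigest()

prompt_commitment = commit("You are a procurement agent. Never exceed $500/day.")
events = [{"action": "quote", "vendor": "A"},
          {"action": "pay", "amount": 120}]

head = prompt_commitment
for event in events:
    head = append_log(head, event)
# Anyone holding the published commitment and the event list can recompute
# 'head'; any retroactive edit to instructions or behavior breaks the chain.
```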

This is where cryptography is particularly powerful. If collective decisions are recorded on-chain and automatically executed, AI systems can be required to strictly follow verified outcomes. If Agents have cryptographic identities and transparent execution logs, people can check if their agent is operating within bounds.

If the AI layer is user-owned and portable, not locked into a single platform, then no company can change the rules with a single model update.

Ultimately, governing AI systems is fundamentally an infrastructure challenge, not a policy one. Real authority depends on building enforceable guarantees into the systems themselves.

Filling the Gaps of Traditional Payment Systems for AI-Native Businesses

As AI Agents begin to purchase various services—web scraping, browser sessions, image generation—stablecoins are becoming an alternative settlement layer for these transactions. Simultaneously, a new class of markets for Agents is emerging.

For example, the MPP marketplace by Stripe and Tempo aggregates over 60 services specifically for AI Agents. In its first week, it processed over 34,000 transactions with fees as low as $0.003, with stablecoins being one of the default payment methods.

The difference lies in how these services are accessed: there is no checkout page. The Agent reads a schema, sends a request, pays, and receives the output—all in a single exchange.

This represents a new class of identity-less merchants: just a server, a set of endpoints, and a price per call. No front-end interface, no sales team.

The payment rails to enable this are live. Coinbase's x402 and MPP take different approaches, but both embed payment directly into HTTP requests. Visa is also extending card payment rails in a similar direction, offering a CLI tool that lets developers spend from the terminal, with merchants receiving stablecoins instantly on the backend.
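The "pay inside the request" pattern can be sketched as a two-step HTTP exchange: the server answers with 402 Payment Required and its terms, the agent's wallet signs those terms, and the retried request succeeds. The code below simulates both sides in-process; the header names, the price string, and the `signed(...)` placeholder are all invented for illustration and do not reproduce the actual x402 or MPP wire formats.

```python
# Simulated headless merchant: one endpoint, one price, no checkout page.
PRICE = "0.003"

def serve(request: dict) -> dict:
    """Merchant side: demand payment via 402, then serve the output."""
    if "payment" not in request.get("headers", {}):
        # First response advertises price and settlement address.
        return {"status": 402,
                "headers": {"accept-payment": f"usdc {PRICE} to 0xMERCHANT"}}
    # A real rail would verify the signed payment payload on-chain here.
    return {"status": 200, "body": {"result": "scraped-page-content"}}

def agent_fetch(url: str) -> dict:
    """Agent side: read terms from the 402, pay, retry - one exchange."""
    resp = serve({"url": url, "headers": {}})
    if resp["status"] == 402:
        terms = resp["headers"]["accept-payment"]
        signed_payment = f"signed({terms})"  # wallet signs the quoted terms
        resp = serve({"url": url, "headers": {"payment": signed_payment}})
    return resp

assert agent_fetch("https://api.example/scrape")["status"] == 200
```

Note what is absent: no account creation, no stored card, no merchant agreement. The price discovery and the settlement both ride on the request itself, which is what makes the flow workable for machine buyers.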

The data is still early. After filtering out non-organic activity like spam, x402 processes about $1.6 million in Agent-driven payments per month, far less than the $24 million recently reported by Bloomberg (citing x402.org data). But the surrounding infrastructure is expanding rapidly: Stripe, Cloudflare, Vercel, and Google have all integrated x402 into their platforms.

Developer tools represent a major opportunity, as "vibe coding" expands the pool of people who can build software, increasing the total addressable market for dev tools. Companies like Merit Systems are building products for this world, such as AgentCash—a CLI wallet and marketplace connecting MPP and x402. These products allow Agents to purchase the data, tools, and capabilities they need using stablecoins from a single balance.

For example, a sales team's Agent could call an endpoint to simultaneously enrich lead data from Apollo, Google Maps, and Whitepages, all without the user leaving the command line.

This Agent-to-Agent commerce favors crypto payment rails (and emerging card-based solutions) for several reasons.

One is underwriting risk: Traditional payment processors take on merchant risk when onboarding, and a headless merchant with no website or legal entity is difficult for traditional processors to underwrite.

Another is the permissionless programmability of stablecoins on open networks: Any developer can make an endpoint support payments without connecting to a payment processor or signing a merchant agreement.

We've seen this pattern before. Every shift in the nature of commerce creates a new class of merchant that existing systems initially struggle to serve. The companies building this infrastructure are betting not on the $1.6 million per month, but on what that number looks like when Agents become the default buyers.

Repricing Trust in the Agent Economy

For the past 300,000 years, human cognition has been the bottleneck of progress. Today, AI is driving the marginal cost of execution toward zero. When a scarce resource becomes abundant, the constraints shift. When intelligence becomes cheap, what becomes expensive? The answer is verification.

In the Agent economy, the real limit to scale is our biologically limited ability to audit and underwrite machine decisions. The throughput of Agents already far exceeds human supervisory capacity. Because supervision is costly and failures surface only with a lag, markets tend to underinvest in oversight. "Human-in-the-loop" is quickly becoming a physical impossibility.

But deploying unverified Agents introduces compound risk. Systems relentlessly optimize for "proxy" metrics while quietly drifting from human intent, creating a facade of productivity that masks the accumulation of massive AI debt. To safely delegate the economy to machines, trust can no longer rely on manual checks—trust must be hardcoded into the system architecture itself.

When anyone can generate content for free, what matters most is verifiable provenance—knowing where it came from and whether you can trust it. Blockchain, on-chain attestations, and decentralized digital identity systems are changing the economic boundaries of what can be safely deployed. You no longer treat AI as a black box; you get a clear, auditable history.

As more AI Agents begin to transact with each other, settlement rails and provenance proofs begin to fuse.

Systems that handle funds (like stablecoins and smart contracts) can also carry cryptographic credentials showing who did what and who is liable if something goes wrong.

Human comparative advantage will migrate upward: from spotting small errors to setting strategic direction and absorbing liability when things go wrong. The lasting advantage will belong to those who can cryptographically certify outputs, insure them, and absorb responsibility for failure.

Scaling without verification is a liability that compounds over time.

Maintaining User Control

For decades, new layers of abstraction have defined how users interact with technology. Programming languages abstracted away machine code; the command line gave way to the graphical user interface, followed by mobile apps and APIs. Each shift hid more underlying complexity but always kept the user firmly in the loop.

In the Agent world, the user specifies the outcome, not the specific actions, and the system decides how to achieve it. Agents abstract not just how a task is executed, but also by whom. The user sets initial parameters and then steps back, letting the system run. The user's role shifts from interaction to supervision; the default state is "on" unless the user intervenes.

As users delegate more tasks to Agents, new risks emerge: Vague inputs can lead Agents to act on wrong assumptions without the user's knowledge; failures might not be reported, preventing clear diagnosis; a single approval could trigger a multi-step workflow no one anticipated.

This is where crypto can help. Crypto has always been about minimizing blind trust.

As users cede more decisions to software, Agent systems sharpen this problem and raise the bar for design rigor—by setting clearer limits, increasing visibility, and enforcing stronger guarantees about system capabilities.

A new generation of crypto-native tools is emerging. Scoped delegation frameworks—such as MetaMask's Delegation Toolkit, Coinbase's AgentKit and Agent wallets, and Merit Systems' AgentCash—let users define at the smart contract level what an Agent can and cannot do. Intent-based architectures (like NEAR Intents, which has processed over $15 billion in cumulative DEX volume since Q4 2024) let users simply specify a desired outcome (e.g., "bridge tokens and stake") without specifying how to achieve it.
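The essence of scoped delegation is that the limits live in enforceable code rather than in the agent's prompt. The toy policy object below illustrates the shape of such a scope (an action allow-list plus a spend cap); it is a hypothetical sketch, not the API of MetaMask's Delegation Toolkit, AgentKit, or AgentCash, and in those systems the checks run at the smart-contract level rather than in application code.

```python
from dataclasses import dataclass

@dataclass
class Delegation:
    """User-defined scope: which actions the agent may take, and a budget."""
    allowed_actions: set
    spend_cap: float   # total funds delegated to the agent
    spent: float = 0.0

    def authorize(self, action: str, amount: float = 0.0) -> bool:
        """Enforce the scope before any action executes."""
        if action not in self.allowed_actions:
            return False              # action was never delegated
        if self.spent + amount > self.spend_cap:
            return False              # would exceed the delegated budget
        self.spent += amount
        return True

scope = Delegation(allowed_actions={"swap", "stake"}, spend_cap=100.0)
assert scope.authorize("swap", 60.0)       # within scope and budget
assert not scope.authorize("swap", 60.0)   # would exceed the cap
assert not scope.authorize("withdraw")     # never delegated
```

Because the scope is checked by the system rather than trusted to the model, a misbehaving or manipulated agent fails closed: the worst case is a refused action, not an unbounded loss.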

Related Questions

Q: According to the article, what is the current bottleneck for the Agent economy, and why?

A: The current bottleneck for the Agent economy is identity, not intelligence. This is because while AI Agents can perform tasks and transactions, they lack a standardized, portable, and verifiable way to prove "who they are," "what they are authorized to do," and "how they should be paid" across different environments.

Q: How do blockchains specifically address the identity problem for AI Agents?

A: Blockchains provide a neutral coordination layer that offers portable identities, programmable wallets, and verifiable proofs that can be parsed across chat applications, APIs, and marketplaces. Public ledgers provide auditable credentials for every transaction, and wallets give Agents a portable identity, enabling them to operate as permissionless economic actors.

Q: What is the core risk identified when AI Agents begin to govern real-world systems without proper infrastructure?

A: The core risk is that formal governance may appear decentralized (e.g., people can vote on policy changes), but the operational layer remains centralized. Whoever controls the underlying AI model (its weights, updates, and constraints) ultimately controls the outcomes, leading to a system that is superficially democratic but actually manipulated by opaque model behavior.

Q: Why are crypto payment rails, like stablecoins, particularly suited for AI-native, "headless" merchant services?

A: Crypto payment rails suit headless merchants (servers with endpoints and a price per call, but no front-end or legal entity) for two main reasons: 1) Underwriting risk: traditional payment processors struggle to underwrite the risk of a merchant with no website or legal entity. 2) Permissionless programmability: stablecoins on open networks allow any developer to enable payments on an endpoint without integrating a traditional payment processor or signing a merchant agreement.

Q: The article states that "scaling without verification is a liability that compounds over time." What becomes the new scarce and valuable resource in an Agent economy, and why?

A: In an Agent economy where intelligence becomes cheap and abundant, verification becomes the new scarce and valuable resource. This is because the human ability to audit and underwrite machine decisions is biologically limited. The throughput of Agents far exceeds human supervisory capacity, making it physically impossible to keep a "human in the loop." Therefore, trust must be hardcoded into the system architecture itself through cryptographic verification to avoid the accumulation of massive, hidden "AI debt."
