How Blockchain Fills the Identity, Payment, and Trust Gaps for AI Agents

marsbit · Published 2026-04-21 · Updated 2026-04-21

Introduction

AI Agents are rapidly evolving into autonomous economic participants, but they face critical gaps in identity, payment, and trust infrastructure. They currently lack standardized ways to prove who they are, what they are authorized to do, and how they should be compensated across different environments. Blockchain technology is emerging as a solution to these challenges by providing a neutral coordination layer: public ledgers offer auditable credentials, wallets enable portable identities, and stablecoins serve as a programmable settlement layer.

A key bottleneck is the absence of a universal identity standard for non-human entities, akin to "Know Your Agent" (KYA), which would allow Agents to operate with verifiable, cryptographically signed credentials. Without this, Agents remain fragmented and face barriers to interoperability.

As AI systems take on governance roles, there is also a risk that centralized control over models could undermine decentralized governance in practice. Cryptographic guarantees on training data, prompts, and behavior logs are essential to ensure Agents act in users' interests.

Meanwhile, stablecoins and crypto-native payment rails are becoming the default for Agent-to-Agent commerce, enabling seamless, low-cost transactions for AI-native services. These systems support permissionless, programmable payments without traditional merchant onboarding.

Finally, as AI scales, human oversight becomes impractical; trust must be built into the system architecture itself.

Written by: a16z crypto

Compiled by: AididiaoJP, Foresight News

AI Agents are evolving from auxiliary tools into genuine economic participants at a pace far exceeding other infrastructure.

Although Agents can now perform tasks and transactions, they still lack a standard way across environments to prove "who I am," "what I am authorized to do," and "how I should be paid." Identities are not portable, payments are not programmable by default, and collaboration remains siloed.

Blockchain is addressing these issues at the infrastructure level. Public ledgers provide verifiable credentials for every transaction that anyone can audit; wallets grant Agents portable identities; stablecoins serve as an alternative settlement layer. These are not futuristic concepts—they are available today, enabling Agents to operate as true economic entities in a permissionless manner.

Providing Identity for Non-Humans

The current bottleneck in the Agent economy is no longer intelligence, but identity.

In the financial services industry alone, the number of non-human identities (automated trading systems, risk engines, fraud models) is already about 100 times that of human employees. As modern Agent frameworks (tool-calling LLMs, autonomous workflows, multi-agent orchestration) are deployed at scale, this ratio will continue to rise across industries.

However, these Agents are effectively "unbanked." They can interact with the financial system, but not in a portable, verifiable, and inherently trusted manner. They lack a standardized way to prove their permissions, operate independently across platforms, or take responsibility for their actions.

What's missing is a universal identity layer—the SSL equivalent for Agents, enabling standardized collaboration across platforms. Current solutions remain fragmented: on one side, vertically integrated, fiat-first stacks; on the other, crypto-native open standards (like x402 and emerging Agent identity proposals); and developer framework extensions (like MCP, Model Context Protocol) attempting to bridge application-layer identities.

There is still no widely adopted, interoperable way for one Agent to prove to another: who it represents, what it is allowed to do, and how it should be paid.

This is the core idea behind KYA (Know Your Agent). Just as humans rely on credit histories and KYC (Know Your Customer), Agents will need cryptographically signed credentials binding them to principals, permissions, constraints, and reputation. Blockchain provides a neutral coordination layer: portable identities, programmable wallets, and verifiable proofs that can be parsed across chat apps, APIs, and marketplaces.
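To make this concrete, here is a minimal sketch of what a KYA-style credential could look like, assuming a principal who signs a claims object binding an Agent to permissions and a spending cap. HMAC-SHA256 stands in for a real digital signature (for example a wallet signature), and every field name below is illustrative rather than part of any proposed standard.

```python
import hashlib
import hmac
import json

def issue_credential(principal_key: bytes, agent_id: str,
                     permissions: list, spend_cap_usd: float) -> dict:
    """Principal signs a claims object binding the Agent to its scope."""
    claims = {
        "agent_id": agent_id,
        "principal": "did:example:alice",   # hypothetical identifier
        "permissions": sorted(permissions),
        "spend_cap_usd": spend_cap_usd,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(principal_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_credential(principal_key: bytes, credential: dict) -> bool:
    """Anyone holding the verification key can check the binding."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(principal_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

key = b"principal-secret"
cred = issue_credential(key, "agent-42", ["read:crm", "pay:invoice"], 100.0)
assert verify_credential(key, cred)            # untampered credential verifies
cred["claims"]["spend_cap_usd"] = 1_000_000.0  # tampering breaks verification
assert not verify_credential(key, cred)
```

The point of the sketch is the binding itself: once claims are signed, any counterparty can check "who it represents, what it is allowed to do" without trusting the Agent's self-report.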

We are already seeing early implementations emerge: on-chain Agent registries, wallet-native Agents using USDC, ERC standards for "minimally trusted Agents," and developer toolkits combining identity with embedded payments and fraud controls.

But until a universal identity standard emerges, merchants will continue to block Agents at the firewall.

Governing the Systems That AI Operates

As Agents begin to take over real systems, a new problem arises: who truly holds control? Imagine a community or company where AI systems coordinate critical resources—whether allocating capital or managing supply chains. Even if people can vote on policy changes, if the underlying AI layer is controlled by a single provider that can push model updates, adjust constraints, or override decisions, this authority is very fragile. The formal governance layer might be decentralized, but the operational layer remains centralized—whoever controls the model ultimately controls the outcome.

When Agents assume governance roles, they introduce a new layer of dependency. Theoretically, this could make direct democracy more feasible: everyone could have an AI agent helping them understand complex proposals, model trade-offs, and vote based on established preferences. But this vision is only possible if Agents are truly accountable to the people they represent, portable across providers, and technically constrained to follow human instructions. Otherwise, you get a system that appears democratic on the surface but is actually manipulated by opaque model behavior that no one truly controls.

If the current reality is that Agents are primarily built on a handful of foundation models, we need ways to prove that an Agent is acting in the user's interest, not the model company's. This will likely require cryptographic guarantees at multiple levels: (1) the training data, fine-tuning, or reinforcement learning the model instance is based on; (2) the exact prompts and instructions the specific Agent follows; (3) a record of its actual behavior in the real world; (4) credible assurances that the provider cannot change its instructions or retrain it without the user's knowledge after deployment. Without these guarantees, Agent governance devolves into governance by whoever controls the model weights.
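Guarantee (3), a tamper-evident record of real-world behavior, can be sketched as a hash chain in which each log entry commits to the previous one; anchoring the final digest on-chain would make the whole log publicly auditable. This is a minimal illustration under those assumptions, not any specific proposal, and the entry fields are invented.

```python
import hashlib
import json

def append_entry(log: list, action: dict) -> list:
    """Append an action; its digest commits to the previous entry."""
    prev = log[-1]["digest"] if log else "genesis"
    payload = json.dumps({"prev": prev, "action": action}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    log.append({"prev": prev, "action": action, "digest": digest})
    return log

def verify_log(log: list) -> bool:
    """Recompute the chain; any retroactive edit breaks a link."""
    prev = "genesis"
    for entry in log:
        payload = json.dumps({"prev": prev, "action": entry["action"]},
                             sort_keys=True)
        if (entry["prev"] != prev
                or entry["digest"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev = entry["digest"]
    return True

log = []
append_entry(log, {"tool": "search", "query": "supplier quotes"})
append_entry(log, {"tool": "pay", "amount_usd": 12.50})
assert verify_log(log)
log[0]["action"]["query"] = "something else"  # retroactive edit is detected
assert not verify_log(log)
```

A provider can still refuse to log an action, but it cannot silently rewrite what was already committed, which is the property guarantee (4) builds on.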

This is where cryptography is particularly well-suited to help. If collective decisions are recorded on-chain and automatically executed, AI systems can be required to strictly adhere to verified outcomes. If Agents have cryptographic identities and transparent execution logs, people can check if their agents are acting within bounds. If the AI layer is user-owned and portable, not locked to a single platform, then no company can change the rules with a single model update.

Ultimately, governing AI systems is fundamentally an infrastructure challenge, not a policy one. Real authority depends on building enforceable guarantees into the systems themselves.

Filling the Gaps in Traditional Payment Systems for AI-Native Businesses

As AI Agents begin to purchase various services—web scraping, browser sessions, image generation—stablecoins are becoming an alternative settlement layer for these transactions. Simultaneously, a new class of markets for Agents is emerging. For example, the MPP marketplace by Stripe and Tempo aggregates over 60 services specifically designed for AI Agents. In its first week, it processed over 34,000 transactions with fees as low as $0.003, and stablecoins were one of the default payment methods.

The difference lies in how these services are accessed: they have no checkout page. An Agent reads a schema, sends a request, pays, and receives the output, all in a single exchange. This represents a new class of identity-less merchants: just a server, a set of endpoints, and a price per call. No front-end interface, no sales team.

Payment rails enabling this are already live. Coinbase's x402 and MPP take different approaches, but both embed payment directly into HTTP requests. Visa is also expanding card payment rails in a similar direction, offering a CLI tool that allows developers to spend from the terminal, with merchants receiving stablecoins instantly on the backend.
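The general shape of such a flow can be sketched as follows. This is not the actual x402 or MPP wire format; the header name, price, and the settle() stub are invented for illustration. The server answers 402 with a quoted price, and the Agent retries with payment attached, all within one request/response exchange.

```python
PRICE_USD = 0.003  # illustrative per-call price

def settle(payment_proof: str, amount: float) -> bool:
    """Stand-in for on-chain stablecoin settlement verification."""
    return payment_proof == f"paid:{amount}"

def handle_request(headers: dict) -> tuple:
    """Headless merchant: no checkout page, just a priced endpoint."""
    proof = headers.get("X-Payment")
    if proof is None or not settle(proof, PRICE_USD):
        # 402 Payment Required, advertising the price
        return 402, {"price_usd": PRICE_USD}
    return 200, {"result": "enriched lead data"}

# Agent-side flow: call, read the quoted price, pay, retry.
status, body = handle_request({})
assert status == 402
proof = f"paid:{body['price_usd']}"      # agent pays the quoted amount
status, body = handle_request({"X-Payment": proof})
assert status == 200
```

The design point is that payment negotiation rides on ordinary HTTP status codes and headers, so no merchant account, checkout page, or human sign-up sits between the Agent and the service.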

Data is still early. After filtering out non-organic activity like spam, x402 processes approximately $1.6 million in Agent-driven payments monthly, far below the $24 million recently reported by Bloomberg (citing x402.org data). But the surrounding infrastructure is expanding rapidly: Stripe, Cloudflare, Vercel, and Google have already integrated x402 into their platforms.

Developer tools represent a significant opportunity, as "vibe coding" expands the pool of people who can build software, increasing the total addressable market for dev tools. Companies like Merit Systems are building products for this world, such as AgentCash—a CLI wallet and marketplace connecting MPP and x402. These products allow Agents to purchase needed data, tools, and capabilities using stablecoins from a single balance. For example, a sales team's Agent could call an endpoint to enrich lead data simultaneously from Apollo, Google Maps, and Whitepages, all without the user leaving the command line.

This Agent-to-Agent commerce favors crypto rails (and emerging card-based solutions) for several reasons. One is underwriting risk: traditional payment processors assume merchant risk when onboarding, and a headless merchant without a website or legal entity is difficult for traditional processors to underwrite. Another is the permissionless programmability of stablecoins on open networks: any developer can enable payments on an endpoint without integrating a payment processor or signing a merchant agreement.

We've seen this pattern before. Every shift in the form of commerce creates a new class of merchant that existing systems initially struggle to serve. The companies building this infrastructure are betting not on the current $1.6 million per month, but on what that number looks like when Agents become the default buyers.

Repricing Trust in the Agent Economy

For the past 300,000 years, human cognition has been the bottleneck of progress. Today, AI is pushing the marginal cost of execution toward zero. When a scarce resource becomes abundant, the constraints shift. When intelligence becomes cheap, what becomes expensive? The answer is verification.

In the Agent economy, the real limit to scale is our biologically constrained ability to audit and underwrite machine decisions. The throughput of Agents already far exceeds human supervisory capacity. Because supervision is costly and failures surface late, markets tend to under-invest in oversight. "Human-in-the-loop" review is quickly becoming physically impossible.

But deploying unverified Agents introduces compound risk. Systems relentlessly optimize for "proxy" metrics while quietly drifting from human intent, creating a facade of productivity that masks the accumulation of massive AI debt. To safely delegate the economy to machines, trust can no longer rely on manual checks—trust must be hard-coded into the system architecture itself.

When anyone can generate content for free, what matters most is verifiable provenance—knowing where it came from and whether you can trust it. Blockchain, on-chain proofs, and decentralized digital identity systems are changing the economic boundaries of what can be safely deployed. You no longer treat AI as a black box; you get a clear, auditable history.

As more AI Agents begin to transact with each other, settlement rails and provenance proofs begin to fuse. Systems handling funds (like stablecoins and smart contracts) can also carry cryptographic credentials showing who did what and who is liable if things go wrong.
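One way to picture that fusion, as a hedged sketch with invented field names: a settlement receipt that also commits to the hash of the delivered content, so the producer's identity and liability are bound to exactly what was delivered.

```python
import hashlib

def make_receipt(producer_id: str, content: bytes, amount_usd: float) -> dict:
    """Receipt binds payment to the hash of the delivered output."""
    return {
        "producer": producer_id,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "amount_usd": amount_usd,
    }

def verify_delivery(receipt: dict, content: bytes) -> bool:
    """Check that the bytes received are the bytes that were paid for."""
    return receipt["content_sha256"] == hashlib.sha256(content).hexdigest()

output = b'{"leads": ["Acme Corp"]}'
receipt = make_receipt("agent:data-vendor-7", output, 0.003)
assert verify_delivery(receipt, output)           # delivered bytes match receipt
assert not verify_delivery(receipt, b"tampered")  # substitution is detected
```

In a real system the receipt would carry a cryptographic signature and live on the same rail as the payment; the sketch only shows why a shared record of "who did what" falls out naturally once settlement and provenance share a data structure.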

The human comparative advantage will migrate upward: from spotting small errors to setting strategic direction and absorbing liability when things fail. The enduring advantage will belong to those who can cryptographically certify outputs, insure them, and absorb responsibility for failure.

Scale without verification is a liability that compounds over time.

Maintaining User Control

For decades, new layers of abstraction have defined how users interact with technology. Programming languages abstracted away machine code; the command line gave way to graphical user interfaces, followed by mobile apps and APIs. Each shift hid more underlying complexity but kept the user firmly in the loop.

In the Agent world, users specify outcomes, not specific actions, and the system figures out how to achieve them. Agents abstract not only the execution of tasks but also who performs them. Users set initial parameters and then step back, letting the system run on its own. The user's role shifts from interaction to supervision; the default state is "on" unless the user intervenes.

As users delegate more tasks to Agents, new risks emerge: ambiguous inputs can lead Agents to act on false assumptions unbeknownst to the user; failures might go unreported, preventing clear diagnosis; a single approval could trigger multi-step workflows no one anticipated.

This is where cryptography can help. Cryptography has always been about minimizing blind trust. As users delegate more decisions to software, Agent systems make this problem more acute and raise the bar for design rigor—by setting clearer limits, increasing visibility, and enforcing stronger guarantees about system capabilities.

A new generation of crypto-native tools is emerging. Scoped delegation frameworks—such as MetaMask's Delegation Toolkit, Coinbase's AgentKit and Agent wallets, and Merit Systems' AgentCash—allow users to define at the smart contract level what an Agent can and cannot do. Intent-based architectures (like NEAR Intents, which has processed over $15 billion in cumulative DEX volume since Q4 2024) let users simply specify desired outcomes (e.g., "bridge tokens and stake") without dictating how to achieve them.
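A minimal sketch of what scoped delegation enforces, assuming an illustrative policy object rather than any toolkit's actual API: every Agent action is checked against an action allowlist, a per-transaction cap, and a daily cap before it is authorized.

```python
# Illustrative policy; in the frameworks above, rules like these would be
# enforced at the smart contract or wallet level, not in application code.
POLICY = {
    "allowed_actions": {"swap", "stake"},
    "max_per_tx_usd": 50.0,
    "daily_cap_usd": 200.0,
}

def authorize(action: dict, spent_today: float, policy: dict = POLICY) -> bool:
    """Reject anything outside the delegated scope."""
    return (
        action["kind"] in policy["allowed_actions"]
        and action["amount_usd"] <= policy["max_per_tx_usd"]
        and spent_today + action["amount_usd"] <= policy["daily_cap_usd"]
    )

assert authorize({"kind": "swap", "amount_usd": 25.0}, spent_today=0.0)
assert not authorize({"kind": "withdraw", "amount_usd": 25.0}, spent_today=0.0)  # out of scope
assert not authorize({"kind": "swap", "amount_usd": 75.0}, spent_today=0.0)      # over per-tx cap
assert not authorize({"kind": "stake", "amount_usd": 40.0}, spent_today=180.0)   # over daily cap
```

The design choice worth noting is that the user never approves individual actions; they approve the policy once, and the enforcement layer makes out-of-scope behavior impossible rather than merely discouraged.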

Related Questions

Q: What is the core bottleneck for AI Agent economies according to the article, and how can blockchain address it?

A: The core bottleneck is identity. AI Agents lack a standardized, portable, and verifiable way to prove who they are, what they are authorized to do, and how they should be paid. Blockchain addresses this by providing a neutral coordination layer with portable identities (wallets), programmable payments, and verifiable credentials that can be parsed across different applications and markets, enabling Agents to operate as true economic actors.

Q: How does the article describe the risk when AI system governance has a centralized operational layer?

A: The article states that if the underlying AI layer of a governed system is controlled by a single provider (who can push model updates, adjust constraints, or override decisions), then the formal governance layer becomes fragile. Even if people can vote on policy changes, the authority is centralized: whoever controls the model ultimately controls the outcomes, creating a system that is superficially democratic but actually manipulated by opaque model behavior.

Q: Why are stablecoins and crypto payment rails becoming a preferred settlement layer for AI Agent-to-Agent commerce?

A: Stablecoins and crypto payment rails are preferred for several reasons: 1) They offer permissionless programmability on open networks, allowing any developer to enable payments on an endpoint without integrating a traditional payment processor or signing a merchant agreement. 2) They mitigate underwriting risk, as traditional processors struggle to underwrite "headless merchants" (servers with endpoints but no website or legal entity). 3) They enable seamless, embedded payments within a single HTTP request, which is ideal for Agent-driven transactions.

Q: What shifts in the economy when AI makes intelligence cheap and abundant, according to the article's section on "Repricing Trust"?

A: When intelligence becomes cheap and abundant, the constraint and value shift to verification. The article argues that the true limit to scale is our biologically limited human capacity to audit and underwrite machine decisions. Therefore, trust can no longer rely on manual checks but must be hardcoded into the system's architecture itself. The expensive and scarce resource becomes the ability to cryptographically certify outputs, provide insurance for them, and absorb liability when they fail.

Q: What new tools are emerging to help users maintain control when delegating tasks to AI Agents?

A: New crypto-native tools are emerging to help users maintain control. These include scoped delegation frameworks (e.g., MetaMask's Delegation Toolkit, Coinbase's AgentKit and Agent wallets, Merit Systems' AgentCash) that allow users to define at the smart contract level what an Agent can and cannot do. Additionally, intent-based architectures (e.g., NEAR Intents) let users specify only the desired outcome rather than the specific steps to achieve it, shifting the user's role from interaction to supervision with clearer limits and enforced guarantees.
