AI Agents Can Be Verified, But Who Protects Their Privacy?

marsbit · Published 2026-05-14 · Updated 2026-05-14

Summary

As AI Agents evolve from automated tools into active participants in on-chain economies, a critical challenge emerges: establishing trust while preserving privacy. While standards like ERC-8004 aim to provide verifiable identity and reputation for agents, their public nature could expose sensitive operational strategies, user preferences, and business relationships in fields like DeFi, governance, and prediction markets. The proposed ACTA (Anonymous Credentials for Trustless Agents) framework addresses this by adding a privacy layer. It allows agents to cryptographically prove they meet certain criteria (e.g., having passed an audit or possessing sufficient reputation) without revealing the underlying sensitive data, using zero-knowledge proofs. This shifts trust from "public identity" to "policy-based proof." This shift is crucial because agents act dynamically on behalf of users, making their behavior a potential proxy for user intent. ACTA would enable verification of an agent's legitimacy or authorization without creating a permanent, public map of all its activities and relationships. ACTA remains a research direction with open challenges, including scalability, decentralization of credential issuers, and implementation costs. However, it highlights a fundamental need: a robust Agent economy requires not just mechanisms for verification, but also for protecting the privacy of agents, their users, and the protocols they interact with.

Author: Xiaobai

Title: DevRel at ETHPanda

This article is an original contribution from the author. The views expressed are solely those of the author. ETHPanda has edited and organized the content.

AI Agents are evolving from 'tools that automatically execute tasks' into participants in the on-chain economy. They may trade on behalf of users, participate in governance, call DeFi protocols, submit predictions to markets, and even build reputation across multiple protocols.

But a crucial question arises: if an Agent is to participate in an open network, why should others trust it?

ERC-8004 attempts to answer this question. It provides AI Agents with an open trust infrastructure, including identity registration, reputation records, and verification mechanisms. Through these components, an Agent can have a portable on-chain identity, accumulate cross-application reputation, and undergo independent verification. It's important to note that ERC-8004 is currently still in the Draft stage, and its interfaces and naming may still be adjusted.
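The three roles ERC-8004 describes can be sketched at a high level. Since the standard is still a Draft and its real interfaces are Solidity contracts, every name and structure below is illustrative only:

```python
# Illustrative sketch of the three registries ERC-8004 describes: identity,
# reputation, and validation. Names are invented for this sketch; the Draft's
# actual Solidity interfaces and naming may differ and may still change.

identity_registry: dict[int, str] = {}   # agent_id -> metadata URI (portable identity)
reputation: dict[int, list[int]] = {}    # agent_id -> accumulated feedback scores
validations: dict[int, list[str]] = {}   # agent_id -> independent validator attestations

def register(agent_id: int, uri: str) -> None:
    # A portable, cross-application on-chain identity.
    identity_registry[agent_id] = uri

def leave_feedback(agent_id: int, score: int) -> None:
    # Reputation accumulates across applications against the same identity.
    reputation.setdefault(agent_id, []).append(score)

def attest(agent_id: int, validator_note: str) -> None:
    # Independent verification recorded against the agent's identity.
    validations.setdefault(agent_id, []).append(validator_note)

register(1, "https://example.com/agent-card.json")  # hypothetical URI
leave_feedback(1, 90)
attest(1, "audit-passed")
```

Note that everything in these registries is, by default, public and indexable, which is exactly the tension the rest of the article explores.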

This is important for the Agent economy. Without a unified identity and reputation layer, it is difficult to establish long-term trust between Agents, between Agents and users, and between Agents and protocols. Each application would have to start from scratch in judging whether an Agent is reliable, fragmenting the entire ecosystem.

However, ACTA (Anonymous Credentials for Trustless Agents), recently proposed by PSE, reminds us that the trust layer solves the 'how to prove' problem but does not fully solve the 'what is exposed during proof' problem. It's important to note that ACTA is currently a research draft and design direction rather than a completed standard implementation.

01 Verifiable Does Not Mean Everything Should Be Public

On-chain, verifiability often implies publicity.

If an Agent leaves records of identity, interactions, feedback, and verification in the ERC-8004 registry, this information could be indexed and tracked indefinitely. For ordinary applications, this might just be transparency; but in DeFi, governance, prediction markets, and compliance scenarios, these public records could directly expose strategies, relationships, and commercial intentions.

Imagine a DeFi protocol using multiple AI Agents for liquidity routing, risk assessment, and liquidation tasks. Every Agent call, every piece of feedback, every task label could potentially be reconstructed by external observers into an interaction graph.

This graph is more than just metadata. It could reveal which models the protocol is using, which service providers it relies on, which strategies it prefers, and even expose undisclosed business relationships.

The same problem occurs in governance and prediction markets. If an Agent votes, evaluates proposals, or participates in predictions on behalf of a user, public interaction records could allow external observers to infer the user's identity, political preferences, trading intentions, or organizational affiliations.

Therefore, the Agent economy must not only discuss 'how to build trust' but also discuss 'which trust proofs should not be public.'

02 The Privacy Layer ACTA Aims to Add

ACTA's role is not to replace ERC-8004, but to serve as a privacy layer on top of it.

Its core idea is to enable an Agent to prove it meets certain conditions without disclosing the underlying data.

For example, a protocol could require an Agent to prove:

  • It has passed a certain audit;
  • Its audit score is above a certain threshold;
  • It is using an allowed model version;
  • Its operator is not in certain restricted jurisdictions;
  • It possesses sufficient historical reputation;
  • It is authorized by a verified human principal.

In traditional public-chain designs, an Agent might need to expose audit scores, model hashes, wallet addresses, feedback records, or operator information. However, ACTA aims to use anonymous credentials and zero-knowledge proofs to allow an Agent to only prove 'I satisfy this policy,' without publicly revealing 'how I satisfy it.'

In other words, the verifier does not need to know the Agent's full identity and complete history, only that it complies with the current protocol's access rules.
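The 'prove the policy, not the data' idea can be illustrated with a minimal sketch: a policy is a predicate over private agent attributes. The attribute names and thresholds below are invented for illustration, and in ACTA the predicate would be proven in zero knowledge rather than evaluated on cleartext as it is here:

```python
# Illustrative only (not the ACTA spec): which side holds which data.

PRIVATE = {                       # known only to the agent / its operator
    "audit_passed": True,
    "audit_score": 87,
    "model_version": "m-2024-q3",
}

ALLOWED_MODELS = {"m-2024-q3", "m-2024-q4"}  # hypothetical allow-list

def policy(attrs: dict) -> bool:
    # A hypothetical access policy combining the article's example conditions.
    return (attrs["audit_passed"]
            and attrs["audit_score"] >= 80
            and attrs["model_version"] in ALLOWED_MODELS)

# What the verifier learns under ACTA: only that policy(PRIVATE) holds,
# never the attribute values themselves. A real system replaces this
# direct call with verification of a zero-knowledge proof.
assert policy(PRIVATE) is True
```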

03 From 'Public Identity' to 'Policy Proof'

ACTA's key shift is moving trust from 'public identity' to 'policy proof.'

In this framework, a protocol can register a set of verification policies. When an Agent participates in a scenario, it does not directly present all credentials but submits a zero-knowledge proof demonstrating it satisfies that policy.

An on-chain verifier might only see a policy ID, a proof result, and a context-specific nullifier. The nullifier's role is to prevent reuse or double-voting, but it does not link all of the Agent's activities across different scenarios to a single public identity.
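A context-specific nullifier can be sketched as a hash of an agent secret and a context identifier (an illustrative construction, not the ACTA spec): it is stable within one context, so reuse is detectable, yet values from different contexts cannot be linked to each other:

```python
import hashlib

# Illustrative nullifier construction. A real scheme would derive this
# inside a zero-knowledge circuit; the hash here only shows the properties.

def nullifier(agent_secret: bytes, context_id: str) -> str:
    return hashlib.sha256(agent_secret + context_id.encode()).hexdigest()

secret = b"agent-private-key-material"  # hypothetical agent secret

vote_null = nullifier(secret, "dao-vote-42")        # hypothetical context IDs
feedback_null = nullifier(secret, "feedback-epoch-7")

# Same context -> same nullifier: a second vote in "dao-vote-42" is detected.
assert nullifier(secret, "dao-vote-42") == vote_null

# Different contexts -> unrelated nullifiers: an observer cannot join the
# agent's activities across scenarios into one public identity.
assert vote_null != feedback_null
```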

This is particularly important for reputation systems.

If a user wants to leave feedback for an Agent, the system needs to prevent rating inflation and duplicate reviews. But if every piece of feedback is tied to a public address, the interaction relationship between the user and the Agent would be permanently exposed. ACTA attempts to allow a user to prove 'I did have a valid interaction with this Agent, and I haven't given duplicate feedback,' without disclosing their address and complete interaction history.

This makes reputation verifiable without becoming a network-wide visible relationship graph.
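The duplicate-feedback check above can be sketched with the same nullifier idea (illustrative only): the system stores spent nullifiers rather than reviewer addresses, so each (user, agent) pair can rate once without the pairing itself becoming public:

```python
import hashlib

# Illustrative sketch, not the ACTA spec: the registry records only spent
# nullifiers, never which user rated which agent.

spent: set[str] = set()

def feedback_nullifier(user_secret: bytes, agent_id: str) -> str:
    # One tag per (user, agent) pair: each user can rate each agent once.
    return hashlib.sha256(user_secret + agent_id.encode()).hexdigest()

def submit_feedback(user_secret: bytes, agent_id: str, rating: int) -> bool:
    tag = feedback_nullifier(user_secret, agent_id)
    if tag in spent:
        return False              # duplicate review rejected
    spent.add(tag)
    # (a real system would also verify a ZK proof of a valid interaction)
    return True

assert submit_feedback(b"alice-secret", "agent-1", 5) is True
assert submit_feedback(b"alice-secret", "agent-1", 1) is False  # duplicate
assert submit_feedback(b"bob-secret", "agent-1", 4) is True     # different user
```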

04 Why Is This Important for AI Agents?

AI Agents differ from ordinary smart contracts.

Smart contracts are usually static code with relatively clear behavioral boundaries; whereas Agents are closer to continuously acting entities. They may adjust strategies based on environmental changes and act on behalf of users across multiple protocols.

This means an Agent's identity, permissions, model source, reputation, and delegation relationships become sensitive.

If, in the future, users delegate tasks like trading, voting, research, liquidation, and quoting to Agents, then an Agent's behavioral trajectory could become a proxy signal for user intent. Observing an Agent could indirectly mean observing a user.

This is also why ACTA discusses 'on-behalf-of delegation': an Agent may need to prove it is acting under the authorization of a verified human principal, without revealing that person's real-world identity.
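One way to sketch such delegation (purely illustrative; real ACTA would use anonymous credentials, and the HMAC below is only a stand-in for an issuer signature): the principal commits to their identity, a trusted issuer attests to the commitment, and verifiers check the attestation without ever seeing the identity:

```python
import hashlib
import hmac

# Illustrative "on-behalf-of" delegation sketch. All names and the issuer
# construction are assumptions for this example, not part of any spec.

ISSUER_KEY = b"credential-issuer-secret"   # hypothetical trusted issuer key

def identity_commitment(principal_id: str, blind: bytes) -> str:
    # Hides the principal's identity behind a blinded commitment.
    return hashlib.sha256(principal_id.encode() + blind).hexdigest()

def issue_credential(commitment: str) -> str:
    # Issuer attests: "this commitment belongs to a verified human principal."
    return hmac.new(ISSUER_KEY, commitment.encode(), "sha256").hexdigest()

# Principal side: commit, obtain a credential, hand both to the agent.
commit = identity_commitment("alice@example.com", b"random-blinding")
cred = issue_credential(commit)

# Verifier side: sees only the commitment and the issuer's attestation,
# never "alice@example.com".
assert hmac.compare_digest(cred, issue_credential(commit))
```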

For DAO governance, this can help protocols distinguish between 'Agents authorized by real participants' and 'completely unconstrained bots.' For DeFi, this can allow protocols to verify an Agent's compliance and risk qualifications without exposing all business relationships to competitors. For prediction markets, this can reduce the risk of participants being re-identified or strategies being copied.

05 ACTA Remains an Open Question

Of course, ACTA is currently more of a research and design direction than a completed standard implementation.

The original proposal also notes several issues still open for discussion, including anonymity set size, centralization risks of credential issuers, threshold deanonymization of malicious Agents, cross-chain credential portability, and the cost and latency of client-side proof generation.

These issues are not trivial. Privacy systems are only likely to be adopted by real protocols when the anonymity set is large enough, issuers are trustworthy enough, proof costs are low enough, and the developer experience is good enough.

Otherwise, it might remain theoretically correct but difficult to enter production environments.

Nevertheless, the direction ACTA points to is still important. Because it identifies a fundamental contradiction in the Agent trust layer: we need verifiable Agents, but Agents, users, and protocols should not have to pay the price of excessive publicity for verifiability.

06 What Should the Chinese Community Pay Attention To?

For the Chinese community, the takeaway from ACTA is not just a new privacy-technology proposal, but a prompt to rethink how we understand AI Agent infrastructure.

When discussing the Agent economy in the past, people often focused on model capabilities, automated execution, on-chain identity, and reputation systems. But as Agents gradually enter financial, governance, and compliance scenarios, privacy will change from an 'optional feature' to a 'basic requirement.'

A truly usable Agent trust layer cannot only answer:

'Is this Agent trustworthy?'

It must also answer:

'What information does it expose while proving it is trustworthy?'

If all interactions, feedback, credentials, and delegation relationships of Agents are permanently public, the on-chain Agent economy might become transparent yet fragile. Transparency brings verifiability, but may also bring strategy leakage, relationship exposure, and identity correlation.

The value of ACTA lies in putting this issue on the table early.

ACTA is not a conclusion yet, but the questions it raises are worth discussing in advance: the future Agent economy should not be built solely on public identity and public reputation. It also needs a layer of privacy-preserving proof mechanisms, allowing Agents to prove they comply with rules while retaining necessary identity, relationship, and strategy privacy.

When AI Agents start acting on behalf of humans, privacy is no longer just about human privacy; it also becomes the security boundary of the Agent economy itself.

Related Questions

Q: What is the core problem that ERC-8004 aims to solve for AI Agents, and what critical issue does ACTA address as a complement?

A: ERC-8004 aims to solve the problem of trust for AI Agents in open networks by providing a unified infrastructure for identity, reputation, and verification. ACTA addresses the complementary issue of privacy, specifically the over-exposure of sensitive information (like strategies, relationships, and intent) that can occur when an Agent publicly verifies its credentials on such a trust layer.

Q: How does ACTA's approach to verification differ fundamentally from traditional public blockchain methods?

A: ACTA shifts verification from 'public identity' to 'policy proof'. Instead of an Agent publicly exposing all its underlying credential data (like audit scores, model hashes, or wallet addresses), it uses anonymous credentials and zero-knowledge proofs to demonstrate only that it satisfies a specific protocol's access policy, without revealing *how* it satisfies it.

Q: According to the article, why is a privacy-preserving trust layer like ACTA particularly important for AI Agents compared to standard smart contracts?

A: AI Agents are more like active, continuous actors that can adjust strategies and act on behalf of users across multiple protocols. Their behavior patterns can become proxy signals for user intent. A privacy layer is crucial to prevent the exposure of sensitive information like operational relationships, business strategies, user identities, and authorization links, which is less of an issue for static smart contract code with clearer behavioral boundaries.

Q: What is the function of a 'nullifier' in the ACTA framework, and what problem does it help prevent?

A: In the ACTA framework, a nullifier is a context-specific value used in a zero-knowledge proof. Its primary function is to prevent replay attacks, such as an Agent re-using the same proof for repeated access or duplicate voting in a governance scenario, without linking all of the Agent's activities across different contexts back to a single public identity.

Q: What are some of the open challenges and unresolved questions associated with the ACTA proposal mentioned in the article?

A: The article mentions several open challenges for ACTA: ensuring a sufficiently large anonymity set for effective privacy, mitigating centralization risks from credential issuers, preventing threshold de-anonymization by malicious Agents, achieving cross-chain portability of credentials, and managing the cost and latency of proof generation on the client side.
