AI Agents Can Be Verified, But Who Protects Their Privacy?

marsbit · Published 2026-05-14 · Updated 2026-05-14

Introduction

As AI Agents evolve from automated tools into active participants in on-chain economies, a critical challenge emerges: establishing trust while preserving privacy. While standards like ERC-8004 aim to provide verifiable identity and reputation for agents, their public nature could expose sensitive operational strategies, user preferences, and business relationships in fields like DeFi, governance, and prediction markets.

The proposed ACTA (Anonymous Credentials for Trustless Agents) framework addresses this by adding a privacy layer. It allows agents to cryptographically prove they meet certain criteria (e.g., having passed an audit or possessing sufficient reputation) without revealing the underlying sensitive data, using zero-knowledge proofs. This shifts trust from "public identity" to "policy-based proof." This shift is crucial because agents act dynamically on behalf of users, making their behavior a potential proxy for user intent. ACTA would enable verification of an agent's legitimacy or authorization without creating a permanent, public map of all its activities and relationships.

ACTA remains a research direction with open challenges, including scalability, decentralization of credential issuers, and implementation costs. However, it highlights a fundamental need: a robust Agent economy requires not just mechanisms for verification, but also for protecting the privacy of agents, their users, and the protocols they interact with.

Author: Xiaobai

Title: DevRel at ETHPanda

This article is an original contribution from the author. The views expressed are solely those of the author. ETHPanda has edited and organized the content.

AI Agents are evolving from 'tools that can automatically execute tasks' into participants in the on-chain economy. They may trade on behalf of users, participate in governance, call DeFi protocols, submit predictions to markets, and even build reputation across multiple protocols.

But a crucial question arises: if an Agent is to participate in an open network, why should others trust it?

ERC-8004 attempts to answer this question. It provides AI Agents with an open trust infrastructure, including identity registration, reputation records, and verification mechanisms. Through these components, an Agent can have a portable on-chain identity, accumulate cross-application reputation, and undergo independent verification. It's important to note that ERC-8004 is currently still in the Draft stage, and its interfaces and naming may still be adjusted.

This is important for the Agent economy. Without a unified identity and reputation layer, it is difficult to establish long-term trust between Agents, between Agents and users, and between Agents and protocols. Each application would have to start from scratch in judging whether an Agent is reliable, fragmenting the entire ecosystem.

However, ACTA (Anonymous Credentials for Trustless Agents), recently proposed by PSE, reminds us: the trust layer solves the 'how to prove' problem, but does not fully solve the 'what is exposed during proof' problem. It's important to note that ACTA is currently more of a research draft and design direction than a completed standard implementation.

01 Verifiable Does Not Mean Everything Should Be Public

On-chain, verifiability often implies publicity.

If an Agent leaves records of identity, interactions, feedback, and verification in the ERC-8004 registry, this information could be indexed and tracked indefinitely. For ordinary applications, this might just be transparency; but in DeFi, governance, prediction markets, and compliance scenarios, these public records could directly expose strategies, relationships, and commercial intentions.

Imagine a DeFi protocol using multiple AI Agents for liquidity routing, risk assessment, and liquidation tasks. Every Agent call, every piece of feedback, every task label could potentially be reconstructed by external observers into an interaction graph.

This graph is more than just metadata. It could reveal which models the protocol is using, which service providers it relies on, which strategies it prefers, and even expose undisclosed business relationships.

The same problem occurs in governance and prediction markets. If an Agent votes, evaluates proposals, or participates in predictions on behalf of a user, public interaction records could allow external observers to infer the user's identity, political preferences, trading intentions, or organizational affiliations.

Therefore, the Agent economy must not only discuss 'how to build trust' but also discuss 'which trust proofs should not be public.'

02 The Privacy Layer ACTA Aims to Add

ACTA's role is not to replace ERC-8004, but to serve as a privacy layer on top of it.

Its core idea is to enable an Agent to prove it meets certain conditions without disclosing the underlying data.

For example, a protocol could require an Agent to prove:

  • It has passed a certain audit;
  • Its audit score is above a certain threshold;
  • It is using an allowed model version;
  • Its operator is not in certain restricted jurisdictions;
  • It possesses sufficient historical reputation;
  • It is authorized by a verified human principal.

In traditional public-chain designs, an Agent might need to expose audit scores, model hashes, wallet addresses, feedback records, or operator information. However, ACTA aims to use anonymous credentials and zero-knowledge proofs to allow an Agent to only prove 'I satisfy this policy,' without publicly revealing 'how I satisfy it.'
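To make that information flow concrete, here is a minimal TypeScript sketch. ACTA does not yet define a concrete interface, so every name below (`Policy`, `provePolicy`, `verifyPolicy`) is a hypothetical placeholder, and the proof itself is mocked; the point is only what the verifier does and does not get to see.

```typescript
// Illustrative sketch only: all names are hypothetical placeholders, not part
// of the ACTA proposal. The point is the information flow: the verifier sees a
// policy and a proof, never the underlying credential data.

interface Policy {
  id: string;                // identifier the protocol registers
  minAuditScore: number;
  allowedModels: string[];
}

// Private inputs that stay with the Agent.
interface PrivateCredentials {
  auditScore: number;
  modelVersion: string;
}

// Public submission: no scores, no model hashes, no operator addresses.
interface PolicySubmission {
  policyId: string;
  proof: string;             // opaque ZK proof bytes in a real system
  nullifier: string;         // context-bound, prevents reuse (see section 03)
}

// Mock prover. A real implementation would compile the policy predicate into
// a ZK circuit and run a proving system (e.g. Groth16 or PLONK) client-side.
function provePolicy(
  creds: PrivateCredentials,
  policy: Policy,
  nullifier: string,
): PolicySubmission | null {
  const satisfied =
    creds.auditScore >= policy.minAuditScore &&
    policy.allowedModels.includes(creds.modelVersion);
  if (!satisfied) return null; // cannot produce a valid proof
  return { policyId: policy.id, proof: "<zk-proof-placeholder>", nullifier };
}

// Mock verifier. A real verifier checks the proof cryptographically, but it
// still learns only "the policy is satisfied" — nothing about the inputs.
function verifyPolicy(sub: PolicySubmission, policy: Policy): boolean {
  return sub.policyId === policy.id && sub.proof.length > 0;
}

const policy: Policy = {
  id: "policy-defi-access-v1",
  minAuditScore: 80,
  allowedModels: ["model-v3", "model-v3.1"],
};

const submission = provePolicy(
  { auditScore: 92, modelVersion: "model-v3.1" }, // never leaves the Agent
  policy,
  "ctx-nullifier",
);

console.log(submission !== null && verifyPolicy(submission, policy)); // true
```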

In other words, the verifier does not need to know the Agent's full identity and complete history, only that it complies with the current protocol's access rules.

03 From 'Public Identity' to 'Policy Proof'

ACTA's key shift is moving trust from 'public identity' to 'policy proof.'

In this framework, a protocol can register a set of verification policies. When an Agent participates in a scenario, it does not directly present all credentials but submits a zero-knowledge proof demonstrating it satisfies that policy.

An on-chain verifier might only see a policy ID, a proof result, and a context-specific nullifier. The nullifier's role is to prevent reuse or double-voting, but it does not link all of the Agent's activities across different scenarios to a single public identity.
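A minimal sketch of how a context-specific nullifier behaves, assuming a simple hash construction; production ZK systems would typically use a circuit-friendly hash such as Poseidon, and the secret would never leave the prover:

```typescript
import { createHash } from "node:crypto";

// Deterministic per (secret, context): reusing a proof in the same context
// yields the same nullifier and can be rejected; different contexts yield
// values that are unlinkable without knowing the secret.
function nullifier(agentSecret: string, context: string): string {
  return createHash("sha256").update(agentSecret).update(context).digest("hex");
}

const secret = "agent-private-seed"; // stays client-side in a real system

const voteOnce = nullifier(secret, "dao-x:proposal-42");
const voteAgain = nullifier(secret, "dao-x:proposal-42");
const elsewhere = nullifier(secret, "prediction-market-y:round-7");

console.log(voteOnce === voteAgain); // true  -> double-voting is detectable
console.log(voteOnce === elsewhere); // false -> activities stay unlinkable
```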

This is particularly important for reputation systems.

If a user wants to leave feedback for an Agent, the system needs to prevent rating inflation and duplicate reviews. But if every piece of feedback is tied to a public address, the interaction relationship between the user and the Agent would be permanently exposed. ACTA attempts to allow a user to prove 'I did have a valid interaction with this Agent, and I haven't given duplicate feedback,' without disclosing their address and complete interaction history.

This makes reputation verifiable without becoming a network-wide visible relationship graph.
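As an illustration, a feedback registry could enforce 'one review per user per Agent' purely by tracking spent nullifiers; the class and field names below are assumptions for the sketch, not part of ERC-8004 or ACTA:

```typescript
import { createHash } from "node:crypto";

class FeedbackRegistry {
  private spent = new Set<string>();

  // In a real design, a ZK proof of a valid interaction with the Agent would
  // accompany the submission; it is elided here.
  submit(feedbackNullifier: string, rating: number): boolean {
    if (this.spent.has(feedbackNullifier)) return false; // duplicate review
    this.spent.add(feedbackNullifier);
    // record `rating` against the Agent without learning who submitted it
    return true;
  }
}

const registry = new FeedbackRegistry();
const userSecret = "user-private-seed";
const agentId = "agent-123";

// One nullifier per (user, agent) pair; reveals neither address nor history.
const n = createHash("sha256").update(userSecret).update(agentId).digest("hex");

console.log(registry.submit(n, 5)); // true  -> first review accepted
console.log(registry.submit(n, 1)); // false -> duplicate rejected
```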

04 Why Is This Important for AI Agents?

AI Agents differ from ordinary smart contracts.

Smart contracts are usually static code with relatively clear behavioral boundaries; whereas Agents are closer to continuously acting entities. They may adjust strategies based on environmental changes and act on behalf of users across multiple protocols.

This means an Agent's identity, permissions, model source, reputation, and delegation relationships become sensitive.

If, in the future, users delegate tasks like trading, voting, research, liquidation, and quoting to Agents, then an Agent's behavioral trajectory could become a proxy signal for user intent. Observing an Agent could indirectly mean observing a user.

This is also why ACTA discusses 'on-behalf-of delegation': an Agent may need to prove it is acting under the authorization of a verified human principal, without revealing that person's real-world identity.
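A sketch of what such a delegation credential might contain, with all names hypothetical; the chain would store only a commitment, never the principal's identity:

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of an "on-behalf-of" delegation credential; ACTA's
// actual design may differ.
interface DelegationCredential {
  principalCommitment: string; // H(principalId, blinding) — hides who delegated
  agentId: string;
  scope: string;               // e.g. "governance:vote"
  issuerSignature: string;     // from a trusted verifier of the human principal
}

function commit(principalId: string, blinding: string): string {
  return createHash("sha256").update(principalId).update(blinding).digest("hex");
}

// The Agent later proves in zero knowledge that:
//   1. it holds a credential signed by an accepted issuer,
//   2. the credential's scope covers the current action,
//   3. it knows the opening of principalCommitment,
// without ever revealing principalId or the signature itself.
const credential: DelegationCredential = {
  principalCommitment: commit("principal-real-world-id", "random-blinding"),
  agentId: "agent-123",
  scope: "governance:vote",
  issuerSignature: "<issuer-signature-placeholder>",
};

console.log(credential.principalCommitment); // opaque value, safe to publish
```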

For DAO governance, this can help protocols distinguish between 'Agents authorized by real participants' and 'completely unconstrained bots.' For DeFi, this can allow protocols to verify an Agent's compliance and risk qualifications without exposing all business relationships to competitors. For prediction markets, this can reduce the risk of participants being re-identified or strategies being copied.

05 ACTA Remains an Open Question

Of course, ACTA is currently more of a research and design direction than a completed standard implementation.

The ACTA proposal itself notes several issues still open for discussion, including anonymity set size, centralization risks of credential issuers, threshold deanonymization of malicious Agents, cross-chain credential portability, and the cost and latency of client-side proof generation.

These issues are not trivial. Privacy systems are only likely to be adopted by real protocols when the anonymity set is large enough, issuers are trustworthy enough, proof costs are low enough, and the developer experience is good enough.

Otherwise, it might remain theoretically correct but difficult to enter production environments.

Nevertheless, the direction ACTA points to is still important, because it identifies a fundamental contradiction in the Agent trust layer: we need verifiable Agents, but Agents, users, and protocols should not have to pay the price of excessive publicity for verifiability.

06 What Should the Chinese Community Pay Attention To?

In the context of the Chinese community's discussions, ACTA's significance is not just another privacy technology proposal; it is a reminder to rethink AI Agent infrastructure.

When discussing the Agent economy in the past, people often focused on model capabilities, automated execution, on-chain identity, and reputation systems. But as Agents gradually enter financial, governance, and compliance scenarios, privacy will change from an 'optional feature' to a 'basic requirement.'

A truly usable Agent trust layer cannot only answer:

'Is this Agent trustworthy?'

It must also answer:

'What information does it expose while proving it is trustworthy?'

If all interactions, feedback, credentials, and delegation relationships of Agents are permanently public, the on-chain Agent economy might become transparent yet fragile. Transparency brings verifiability, but may also bring strategy leakage, relationship exposure, and identity correlation.

The value of ACTA lies in putting this issue on the table early.

ACTA is not a conclusion yet, but the questions it raises are worth discussing in advance: the future Agent economy should not be built solely on public identity and public reputation. It also needs a layer of privacy-preserving proof mechanisms, allowing Agents to prove they comply with rules while retaining necessary identity, relationship, and strategy privacy.

When AI Agents start acting on behalf of humans, privacy is no longer just about human privacy; it also becomes the security boundary of the Agent economy itself.

Related Questions

Q: What is the core problem that ERC-8004 aims to solve for AI Agents, and what critical issue does ACTA address as a complement?

A: ERC-8004 aims to solve the problem of trust for AI Agents in open networks by providing a unified infrastructure for identity, reputation, and verification. ACTA addresses the complementary issue of privacy, specifically the over-exposure of sensitive information (like strategies, relationships, and intent) that can occur when an Agent publicly verifies its credentials on such a trust layer.

Q: How does ACTA's approach to verification differ fundamentally from traditional public blockchain methods?

A: ACTA shifts verification from 'public identity' to 'policy proof'. Instead of an Agent publicly exposing all its underlying credential data (like audit scores, model hashes, or wallet addresses), it uses anonymous credentials and zero-knowledge proofs to demonstrate only that it satisfies a specific protocol's access policy, without revealing *how* it satisfies it.

Q: According to the article, why is a privacy-preserving trust layer like ACTA particularly important for AI Agents compared to standard smart contracts?

A: AI Agents are more like active, continuous actors that can adjust strategies and act on behalf of users across multiple protocols. Their behavior patterns can become proxy signals for user intent. A privacy layer is crucial to prevent the exposure of sensitive information like operational relationships, business strategies, user identities, and authorization links, which is less of an issue for static smart contract code with clearer behavioral boundaries.

Q: What is the function of a 'nullifier' in the ACTA framework, and what problem does it help prevent?

A: In the ACTA framework, a nullifier is a context-specific value used in a zero-knowledge proof. Its primary function is to prevent replay attacks, such as an Agent re-using the same proof for repeated access or duplicate voting in a governance scenario, without linking all of the Agent's activities across different contexts back to a single public identity.

Q: What are some of the open challenges and unresolved questions associated with the ACTA proposal mentioned in the article?

A: The article mentions several open challenges for ACTA: ensuring a sufficiently large anonymity set for effective privacy, mitigating centralization risks from credential issuers, preventing threshold de-anonymization by malicious Agents, achieving cross-chain portability of credentials, and managing the cost and latency of proof generation on the client side.
