Author: Xiaobai, DevRel at ETHPanda
This article is an original contribution from the author. The views expressed are solely those of the author. ETHPanda has edited and organized the content.
AI Agents are evolving from 'tools that can automatically execute tasks' to becoming participants in the on-chain economy. They may trade on behalf of users, participate in governance, call DeFi protocols, submit predictions to markets, and even build reputation across multiple protocols.
But a crucial question arises: if an Agent is to participate in an open network, why should others trust it?
ERC-8004 attempts to answer this question. It provides AI Agents with an open trust infrastructure, including identity registration, reputation records, and verification mechanisms. Through these components, an Agent can have a portable on-chain identity, accumulate cross-application reputation, and undergo independent verification. It's important to note that ERC-8004 is currently still in the Draft stage, and its interfaces and naming may still be adjusted.
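To make the three components concrete, here is a minimal in-memory sketch of the roles ERC-8004's registries play. This is an illustration only: the field names, method names, and flow are simplifications I chose for clarity, not the draft standard's actual Solidity interfaces, which may still change.

```python
from dataclasses import dataclass, field

# Illustrative model of the three ERC-8004 registry roles: identity,
# reputation, and validation. All names here are simplified assumptions,
# not the draft's real interfaces.

@dataclass
class IdentityRegistry:
    agents: dict = field(default_factory=dict)  # agent_id -> registration info
    _next_id: int = 1

    def register(self, domain: str, address: str) -> int:
        """Give an Agent a portable on-chain identity (an id it can reuse)."""
        agent_id = self._next_id
        self._next_id += 1
        self.agents[agent_id] = {"domain": domain, "address": address}
        return agent_id

@dataclass
class ReputationRegistry:
    feedback_auth: list = field(default_factory=list)  # (client_id, server_id)

    def authorize_feedback(self, client_id: int, server_id: int) -> None:
        # Records that a client may leave feedback about a server Agent,
        # so reputation can accumulate across applications.
        self.feedback_auth.append((client_id, server_id))

@dataclass
class ValidationRegistry:
    requests: dict = field(default_factory=dict)  # data_hash -> validator_id

    def request_validation(self, validator_id: int, data_hash: str) -> None:
        # Asks an independent validator to check an Agent's work.
        self.requests[data_hash] = validator_id

ids = IdentityRegistry()
alice_agent = ids.register("agent.example.org", "0xAbC")  # hypothetical values
```

The point of the sketch is the separation of concerns: identity is registered once and carried everywhere, while reputation and validation attach to that identity rather than to any single application.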
This is important for the Agent economy. Without a unified identity and reputation layer, it is difficult to establish long-term trust between Agents, between Agents and users, and between Agents and protocols. Each application would have to start from scratch in judging whether an Agent is reliable, fragmenting the entire ecosystem.
However, ACTA (Anonymous Credentials for Trustless Agents), recently proposed by PSE, reminds us that the trust layer solves the 'how to prove' problem but does not fully solve the 'what is exposed during proof' problem. Note that ACTA is currently a research draft and design direction rather than a completed standard implementation.
01 Verifiable Does Not Mean Everything Should Be Public
On-chain, verifiability usually means public visibility.
If an Agent leaves records of identity, interactions, feedback, and verification in the ERC-8004 registry, this information could be indexed and tracked indefinitely. For ordinary applications, this might just be transparency; but in DeFi, governance, prediction markets, and compliance scenarios, these public records could directly expose strategies, relationships, and commercial intentions.
Imagine a DeFi protocol using multiple AI Agents for liquidity routing, risk assessment, and liquidation tasks. Every Agent call, every piece of feedback, every task label could potentially be reconstructed by external observers into an interaction graph.
This graph is more than just metadata. It could reveal which models the protocol is using, which service providers it relies on, which strategies it prefers, and even expose undisclosed business relationships.
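A few lines of code show how cheap this reconstruction is for an outside observer. The event shape below is hypothetical; the point is only that public (caller, agent, task) records compose into a revealing dependency graph.

```python
from collections import defaultdict

# Hypothetical indexed on-chain events an observer could collect.
events = [
    {"caller": "ProtocolA", "agent": "agent-risk-1", "task": "risk-score"},
    {"caller": "ProtocolA", "agent": "agent-route-2", "task": "liquidity-routing"},
    {"caller": "ProtocolA", "agent": "agent-risk-1", "task": "liquidation-check"},
]

# Fold the public records into an interaction graph: caller -> (agent, task).
graph = defaultdict(set)
for e in events:
    graph[e["caller"]].add((e["agent"], e["task"]))

# From three public events the observer already knows which agents
# ProtocolA relies on, and for which strategies.
```

No special access is needed; anyone running an indexer can build the same graph.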
The same problem occurs in governance and prediction markets. If an Agent votes, evaluates proposals, or participates in predictions on behalf of a user, public interaction records could allow external observers to infer the user's identity, political preferences, trading intentions, or organizational affiliations.
Therefore, the Agent economy must not only discuss 'how to build trust' but also discuss 'which trust proofs should not be public.'
02 The Privacy Layer ACTA Aims to Add
ACTA's role is not to replace ERC-8004, but to serve as a privacy layer on top of it.
Its core idea is to enable an Agent to prove it meets certain conditions without disclosing the underlying data.
For example, a protocol could require an Agent to prove:
- It has passed a certain audit;
- Its audit score is above a certain threshold;
- It is using an allowed model version;
- Its operator is not in certain restricted jurisdictions;
- It possesses sufficient historical reputation;
- It is authorized by a verified human principal.
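These requirements can be pictured as a policy: a named set of predicates over the Agent's private attributes. In ACTA the checks would run inside a zero-knowledge circuit; in the sketch below a plain function stands in for the circuit, so that only the boolean outcome and a policy id leave the prover. All names and thresholds are invented for illustration.

```python
# A "policy" as a set of predicates over private attributes. The policy id
# and every threshold here are hypothetical.
POLICY = {
    "id": "defi-access-v1",
    "predicates": [
        lambda a: a["audit_score"] >= 80,          # audit score above threshold
        lambda a: a["model"] in {"m-2024-10"},     # allowed model version
        lambda a: a["jurisdiction"] not in {"XX"}, # not in restricted region
        lambda a: a["reputation"] >= 50,           # sufficient history
        lambda a: a["delegated_by_human"],         # verified human principal
    ],
}

def prove_policy(private_attrs: dict, policy: dict) -> dict:
    """Stand-in for a ZK circuit: returns only (policy id, satisfied?),
    never the underlying attributes."""
    ok = all(p(private_attrs) for p in policy["predicates"])
    return {"policy_id": policy["id"], "satisfied": ok}

agent_private = {  # stays on the Agent's side, never published
    "audit_score": 91, "model": "m-2024-10",
    "jurisdiction": "SE", "reputation": 73, "delegated_by_human": True,
}
proof_claim = prove_policy(agent_private, POLICY)
# The verifier sees proof_claim, not agent_private.
```

The design choice to surface only `{policy_id, satisfied}` is exactly the shift from "show me your data" to "show me you satisfy the rule."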
In traditional public-chain designs, an Agent might need to expose audit scores, model hashes, wallet addresses, feedback records, or operator information. ACTA instead aims to use anonymous credentials and zero-knowledge proofs so that an Agent proves only 'I satisfy this policy,' without publicly revealing 'how I satisfy it.'
In other words, the verifier does not need to know the Agent's full identity and complete history, only that it complies with the current protocol's access rules.
03 From 'Public Identity' to 'Policy Proof'
ACTA's key shift is moving trust from 'public identity' to 'policy proof.'
In this framework, a protocol can register a set of verification policies. When an Agent participates in a scenario, it does not directly present all credentials but submits a zero-knowledge proof demonstrating it satisfies that policy.
An on-chain verifier might only see a policy ID, a proof result, and a context-specific nullifier. The nullifier's role is to prevent reuse or double-voting, but it does not link all of the Agent's activities across different scenarios to a single public identity.
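The nullifier mechanics can be sketched directly. The assumption here is that a nullifier is derived from the Agent's secret together with a context string (real schemes compute this inside a ZK circuit so the secret is never revealed); the verifier then only needs a set of spent nullifiers per context.

```python
import hashlib

def nullifier(secret: bytes, context: str) -> str:
    """Context-scoped nullifier: same secret, different context, unlinkable
    outputs. Derivation is an assumption for illustration."""
    return hashlib.sha256(secret + context.encode()).hexdigest()

class PolicyVerifier:
    """What the on-chain verifier tracks: a policy id and spent nullifiers."""
    def __init__(self, policy_id: str):
        self.policy_id = policy_id
        self.seen = set()

    def accept(self, proof_ok: bool, null: str) -> bool:
        if not proof_ok or null in self.seen:
            return False          # invalid proof, or reuse / double-vote
        self.seen.add(null)
        return True

secret = b"agent-secret-key"      # hypothetical, never leaves the Agent
v = PolicyVerifier("defi-access-v1")
n_vote = nullifier(secret, "dao-vote-42")
n_feed = nullifier(secret, "feedback-epoch-7")

assert v.accept(True, n_vote)     # first use in this context passes
assert not v.accept(True, n_vote) # reuse in the same context is rejected
assert n_vote != n_feed           # different contexts cannot be linked
```

Because each context yields an independent-looking hash, blocking reuse within one scenario does not create a global identifier across scenarios.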
This is particularly important for reputation systems.
If a user wants to leave feedback for an Agent, the system needs to prevent rating inflation and duplicate reviews. But if every piece of feedback is tied to a public address, the interaction relationship between the user and the Agent would be permanently exposed. ACTA attempts to allow a user to prove 'I did have a valid interaction with this Agent, and I haven't given duplicate feedback,' without disclosing their address and complete interaction history.
This makes reputation verifiable without becoming a network-wide visible relationship graph.
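A feedback flow built on this idea might look as follows. This is an assumed design, not ACTA's specification: valid interactions leave commitments in a public anonymous set, and a rating is accepted only with a fresh per-user-per-agent nullifier. The membership check below is done in the clear purely as a stand-in; in a real system it would be a ZK proof and the commitment itself would stay hidden.

```python
import hashlib

def h(*parts: bytes) -> str:
    return hashlib.sha256(b"|".join(parts)).hexdigest()

class FeedbackRegistry:
    def __init__(self):
        self.interaction_commitments = set()  # public anonymous set
        self.ratings = {}                     # nullifier -> rating

    def record_interaction(self, commitment: str) -> None:
        self.interaction_commitments.add(commitment)

    def submit_feedback(self, commitment: str, null: str, rating: int) -> bool:
        # Stand-in for a ZK membership proof; a real verifier would never
        # see which commitment is being proven.
        if commitment not in self.interaction_commitments:
            return False                      # no valid interaction
        if null in self.ratings:
            return False                      # duplicate feedback blocked
        self.ratings[null] = rating
        return True

user_secret = b"user-seed"                    # hypothetical, kept private
agent_id = b"agent-17"
reg = FeedbackRegistry()

commitment = h(user_secret, agent_id, b"interaction-1")
reg.record_interaction(commitment)

null = h(user_secret, agent_id, b"feedback")  # one nullifier per user+agent
assert reg.submit_feedback(commitment, null, rating=5)
assert not reg.submit_feedback(commitment, null, rating=5)  # no duplicates
```

The registry ends up holding ratings keyed by nullifiers, not by addresses, so the user-to-agent relationship graph never materializes on-chain.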
04 Why Is This Important for AI Agents?
AI Agents differ from ordinary smart contracts.
Smart contracts are usually static code with relatively clear behavioral boundaries; whereas Agents are closer to continuously acting entities. They may adjust strategies based on environmental changes and act on behalf of users across multiple protocols.
This means an Agent's identity, permissions, model source, reputation, and delegation relationships become sensitive.
If, in the future, users delegate tasks like trading, voting, research, liquidation, and quoting to Agents, then an Agent's behavioral trajectory could become a proxy signal for user intent. Observing an Agent could indirectly mean observing a user.
This is also why ACTA discusses 'on-behalf-of delegation': an Agent may need to prove it is acting under the authorization of a verified human principal, without revealing that person's real-world identity.
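One assumed construction for such delegation: the verified human principal registers only a salted commitment to their identity, and the Agent later proves it holds an authorization tied to that commitment. The verifier learns "some verified human authorized this Agent," not who. As before, the clear-text check stands in for a zero-knowledge proof, and all identifiers are hypothetical.

```python
import hashlib

def h(*parts: str) -> str:
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

verified_principals = set()                     # public: commitments only

principal_id, salt = "alice@example.org", "s3cr3t"  # private, hypothetical
commitment = h(principal_id, salt)
verified_principals.add(commitment)             # done once, at verification

def authorize(agent: str) -> str:
    """Token the principal hands to their Agent."""
    return h(commitment, agent)

def verify_delegation(agent: str, token: str) -> bool:
    # Stand-in for a ZK proof: the verifier checks the token against the
    # public commitments without ever seeing principal_id or salt.
    return any(h(c, agent) == token for c in verified_principals)

token = authorize("agent-9")
assert verify_delegation("agent-9", token)
assert not verify_delegation("agent-9", h("bogus", "agent-9"))
```

Nothing on the verifier's side names the human; revocation or accountability would need extra machinery, which is part of what ACTA leaves open.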
- For DAO governance, this helps protocols distinguish 'Agents authorized by real participants' from 'completely unconstrained bots.'
- For DeFi, it lets protocols verify an Agent's compliance and risk qualifications without exposing all business relationships to competitors.
- For prediction markets, it reduces the risk of participants being re-identified or strategies being copied.
05 ACTA Remains an Open Question
Of course, ACTA is currently more of a research and design direction than a completed standard implementation.
The ACTA proposal itself flags several issues still open for discussion, including anonymity set size, centralization risks of credential issuers, threshold deanonymization of malicious Agents, cross-chain credential portability, and the cost and latency of client-side proof generation.
These issues are not trivial. Privacy systems are only likely to be adopted by real protocols when the anonymity set is large enough, issuers are trustworthy enough, proof costs are low enough, and the developer experience is good enough.
Otherwise, it might remain theoretically correct but difficult to enter production environments.
Nevertheless, the direction ACTA points to is still important. Because it identifies a fundamental contradiction in the Agent trust layer: we need verifiable Agents, but Agents, users, and protocols should not have to pay the price of excessive publicity for verifiability.
06 What Should the Chinese Community Pay Attention To?
From the discussion context of the Chinese community, the inspiration from ACTA is not just a new privacy technology proposal, but a reminder to re-understand AI Agent infrastructure.
When discussing the Agent economy in the past, people often focused on model capabilities, automated execution, on-chain identity, and reputation systems. But as Agents gradually enter financial, governance, and compliance scenarios, privacy will change from an 'optional feature' to a 'basic requirement.'
A truly usable Agent trust layer cannot only answer:
'Is this Agent trustworthy?'
It must also answer:
'What information does it expose while proving it is trustworthy?'
If all interactions, feedback, credentials, and delegation relationships of Agents are permanently public, the on-chain Agent economy might become transparent yet fragile. Transparency brings verifiability, but may also bring strategy leakage, relationship exposure, and identity correlation.
The value of ACTA lies in putting this issue on the table early.
ACTA is not a conclusion yet, but the questions it raises are worth discussing in advance: the future Agent economy should not be built solely on public identity and public reputation. It also needs a layer of privacy-preserving proof mechanisms, allowing Agents to prove they comply with rules while retaining necessary identity, relationship, and strategy privacy.
When AI Agents start acting on behalf of humans, privacy is no longer just about human privacy; it also becomes the security boundary of the Agent economy itself.