Your AI Agent is Quietly Changing the Rules of the Internet

Bitpush · Published on 2026-03-05 · Last updated on 2026-03-05

Abstract

AI agents are rapidly transforming the internet landscape, evolving from experimental tools into essential components of daily operations: managing emails, scheduling meetings, and handling support tickets. In 2025, automated traffic surpassed human activity, accounting for 51% of all web traffic, with AI-driven visits to US retail sites surging 4,700% year-over-year. Yet confidence in fully autonomous agents has declined amid security concerns, as infrastructure struggles to keep pace with their expansion. Key challenges include discoverability: agents must efficiently find machine-readable services amid web pages designed for humans, prompting a shift from SEO to Agent-Oriented Discoverability (AEO). Identity is equally critical: agents need cryptographic authentication, delegated authority, and real-world accountability to transact securely, driving emerging standards such as ERC-8004 and protocols like Visa's Trusted Agent Protocol. Finally, reputation systems are needed to verify agent performance through trusted execution environments (TEEs), zero-knowledge machine learning (ZKML), and economic security models, enabling portable, auditable records of reliability. Together, discoverability, identity, and reputation form the foundational infrastructure of an agent-driven economy, allowing agents to operate at scale with trust and autonomy.

Author: Vaidik Mandloi

Original Title: Know Your Agent

Compiled and Arranged by: BitpushNews


The promise that AI Agents will change the internet landscape is gradually becoming a reality. They have moved beyond being experimental tools in chat windows to become an indispensable part of our daily operations—from cleaning up inboxes and scheduling meetings to responding to support tickets. They silently enhance productivity, a change often overlooked.

However, this growth is not merely anecdotal.

In 2025, automated traffic surpassed human traffic, accounting for 51% of total web activity. AI-driven traffic to US retail websites alone grew 4,700% year-over-year. AI agents now operate across internal systems, many with the ability to access data, trigger workflows, and even initiate transactions.

However, confidence in fully autonomous agents has dropped from 43% to 22% within a year, largely due to rising security incidents. Nearly half of enterprises still use shared API keys to authenticate agents, a method never designed for autonomous systems to move value or act independently.

The problem is: agents are scaling faster than the infrastructure designed to govern them.

In response, entirely new protocol stacks are emerging. Stablecoins, card network integrations, and agent-native standards like x402 are enabling machine-initiated transactions. Simultaneously, new identity and verification layers are being developed to help agents identify themselves and operate within structured environments.

But enabling payments is not equivalent to enabling an economy. Because once agents can move value, more fundamental questions arise: How do they discover suitable services in a machine-readable way? How do they prove identity and authorization? How do we verify that the actions they claim to perform actually happened?

This article will explore the infrastructure needed for an agent-driven economy to operate at scale and assess whether these layers are mature enough to support persistent, autonomous participants operating at machine speed.

Agents Cannot Buy What They Cannot See

Before an agent can pay for a service, it must first find that service. This sounds simple but is currently the area of greatest friction.

The internet was built for humans to read pages. When humans search for content, search engines return ranked links. These pages are optimized for persuasion. They are filled with layouts, trackers, ads, navigation bars, and stylistic elements that make sense to humans but are mostly "noise" to machines.

When an agent requests the same page, it receives raw HTML. A typical blog post or product page in this form can run to roughly 16,000 tokens. Converted to clean Markdown, the same content drops to about 3,000 tokens, leaving the model roughly 80% less content to process. For a single request the difference is negligible, but when an agent makes thousands of such requests across multiple services, the excess processing compounds into latency, cost, and higher inference complexity.
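The scale of this overhead can be sketched with a rough chars-per-token heuristic (about 4 characters per token is an assumption, not a real tokenizer) and a deliberately naive tag stripper; production pipelines would use a proper HTML-to-Markdown converter:

```python
import re

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text (assumption).
    return max(1, len(text) // 4)

def strip_html(html: str) -> str:
    # Naive stripper for illustration only: drop script/style blocks,
    # then remove remaining tags and collapse whitespace.
    text = re.sub(r"<script.*?</script>|<style.*?</style>", "", html, flags=re.S)
    text = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", text).strip()

page = ("<html><head><style>body{color:red}</style></head>"
        "<body><nav>Home | Shop</nav>"
        "<p>Widget, $19.99, in stock.</p></body></html>")

raw_tokens = estimate_tokens(page)
clean_tokens = estimate_tokens(strip_html(page))
print(raw_tokens, clean_tokens)  # cleaned text is a fraction of the raw HTML
```

Even on this toy page, most of the token budget goes to markup rather than the product information the agent actually needs.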

(Image source: Cloudflare)

Agents end up spending significant computational effort stripping away interface elements before they can access the core information needed to take action. This effort does not improve output quality; it merely compensates for a web never designed for them.

As agent-driven traffic grows, this inefficiency becomes more apparent. AI-driven crawling of retail and software websites has increased sharply over the past year, and automated traffic now constitutes the majority of total web activity. Meanwhile, about 79% of major news and content websites block at least one AI crawler. From their perspective, the reaction is understandable: agents extract content without interacting with ads, subscriptions, or traditional conversion funnels, so blocking them protects revenue.

The problem is that the web has no reliable way to distinguish between malicious scrapers and legitimate procurement agents. Both appear as automated traffic, both originate from cloud infrastructure. To the system, they look identical.

The deeper issue is that agents are not trying to "consume" pages; they are trying to discover possibilities for action.

When a human searches for "flights under $500," a list of ranked links is sufficient. A person can compare options and make a decision. When an agent receives the same instruction, it needs something completely different. It needs to know which services accept booking requests, what input format is required, how prices are calculated, and whether payment can be settled programmatically. Very few services clearly publish this information.

(Image source: TowardsAI)

This is why the conversation is shifting from Search Engine Optimization (SEO) to Agent-Oriented Discoverability, often called AEO. If the end-user is an agent, ranking on a search page becomes less important. What matters is whether a service can describe its capabilities in a way that an agent can interpret without guessing. If not, it risks becoming "invisible" in a growing share of economic activity.
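What a machine-interpretable service description might look like can be sketched with a hypothetical capability manifest; the field names below are illustrative, not part of any published standard:

```python
import json

# Hypothetical capability manifest a flight-booking service might publish
# at a well-known URL. All field names here are illustrative assumptions.
manifest_json = """
{
  "service": "example-flights",
  "capabilities": [
    {
      "action": "search_flights",
      "inputs": {"origin": "IATA code", "destination": "IATA code",
                 "max_price_usd": "number"},
      "pricing": "per-request",
      "machine_payable": true
    }
  ]
}
"""

manifest = json.loads(manifest_json)

def supports(manifest: dict, action: str) -> bool:
    # An agent can answer "can I act here, and can I pay programmatically?"
    # without parsing any human-oriented HTML.
    return any(c["action"] == action and c.get("machine_payable")
               for c in manifest["capabilities"])

print(supports(manifest, "search_flights"))  # True
print(supports(manifest, "book_hotel"))      # False
```

The point is not the schema itself but that capability, input format, and payability become direct lookups instead of inference over rendered pages.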

Agents Need Identity

(Image source: Hackernoon)

Once an agent can discover services and initiate transactions, the next major problem is letting the system on the other end know who it is dealing with. In other words: identity.

Today's financial systems run on far more machine identities than human ones. In finance, the ratio of non-human to human identities is approximately 96 to 1. APIs, service accounts, automated scripts, and internal agents dominate institutional infrastructure. Most of them were never designed to have discretion over capital. They execute predefined instructions; they cannot negotiate, choose vendors, or initiate payments on open networks.

Autonomous agents change this boundary. If an agent can directly move stablecoins or trigger a checkout process without manual confirmation, the core question shifts from "Can it pay?" to "Who authorized it to pay?"

This is where identity becomes fundamental, giving rise to the concept of "Know Your Agent" (KYA).

Just as financial institutions verify clients before allowing them to transact, services interacting with autonomous agents must verify three things before granting access to capital or sensitive operations:

  1. Cryptographic Authenticity: Does this agent actually control the keys it claims to use?

  2. Delegated Authority: Who granted this agent permission, and what are its limits?

  3. Real-World Affiliation: Is this agent linked to a legally accountable entity?

These checks together form the identity stack:

  • The base layer is cryptographic key generation and signing. Standards like ERC-8004 attempt to formalize how agents can anchor identity in a verifiable on-chain registry.

  • The middle layer is the identity provider layer. This binds keys to real-world entities like registered companies, financial institutions, or verified individuals. Without this binding, a signature only proves control, not accountability.

  • The edge layer is the verification infrastructure. Payment processors, CDNs, and application servers verify signatures in real time, check associated credentials, and enforce permission boundaries. Visa's Trusted Agent Protocol is an example on the permissioned-commerce side, letting merchants verify that an agent is authorized to transact on behalf of a specific user. Stripe's Agent Commerce Protocol (ACP) is pushing similar checks into programmable checkout and stablecoin flows.

Meanwhile, the Universal Commerce Protocol (UCP), led by Google and Shopify, allows merchants to publish "capability manifests" that agents can discover and negotiate with. It acts as an orchestration layer and is expected to integrate with Google Search and Gemini.
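The verification pattern these layers share, a principal signing what an agent may do and an edge service checking signature plus limits, can be sketched with symmetric HMAC for brevity. This is purely illustrative: real systems would use asymmetric signatures (for example, on-chain keys in the style of ERC-8004), and every name below is hypothetical:

```python
import hmac, hashlib, json

# Illustrative only: a shared secret stands in for the principal's
# signing key. Real deployments would use asymmetric cryptography.
PRINCIPAL_SECRET = b"demo-secret"

def issue_mandate(agent_id: str, spend_limit_usd: int) -> dict:
    # The principal signs what the agent may do: identity plus limits.
    payload = {"agent": agent_id, "spend_limit_usd": spend_limit_usd}
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(PRINCIPAL_SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_charge(mandate: dict, agent_id: str, amount_usd: int) -> bool:
    # Edge-layer check: authentic signature, matching agent, within limits.
    body = json.dumps(mandate["payload"], sort_keys=True).encode()
    expected = hmac.new(PRINCIPAL_SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mandate["sig"]):
        return False
    p = mandate["payload"]
    return p["agent"] == agent_id and amount_usd <= p["spend_limit_usd"]

m = issue_mandate("agent-42", spend_limit_usd=100)
print(verify_charge(m, "agent-42", 80))   # True: within the delegated limit
print(verify_charge(m, "agent-42", 500))  # False: exceeds the limit
```

This captures two of the three checks above (authenticity and delegated authority); real-world affiliation is supplied by the identity-provider layer binding the key to a legal entity.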

(Image source: FintechBrainfood)

An important nuance is that permissionless and permissioned systems will coexist.

On public blockchains, agents can transact without centralized gatekeepers. This increases speed and composability but also intensifies compliance pressure. Stripe's acquisition of Bridge highlights this tension. Stablecoins enable instant cross-border transfers, but compliance obligations don't disappear just because settlement happens on-chain.

This tension inevitably draws regulators in. Once autonomous agents can initiate financial transactions and interact with markets without direct human supervision, questions of accountability become unavoidable. The financial system cannot allow capital to flow through unidentified or unauthorized actors, even if those actors are pieces of software.

Regulatory frameworks are already being adopted. The Colorado AI Act, effective February 1, 2026, introduces accountability requirements for high-risk automated systems, with similar legislation advancing globally. As agents begin executing financial decisions at scale, identity will cease to be optional. If discoverability makes agents visible, identity is the credential that makes them recognized.

Verifying Agent Execution and Reputation

Once agents start performing tasks involving money, contracts, or sensitive information, merely having an identity might not be enough. A verified agent can still hallucinate, misrepresent its work, leak information, or underperform.

Thus, the most critical question becomes: Can it be proven that the agent actually did the work it claims?

If an agent states it analyzed 1,000 documents, detected fraud patterns, or executed a trading strategy, there must be a way to verify that the computation actually occurred and that the output was not forged or corrupted. This is the job of a performance layer.

Currently, there are three approaches to achieve this:

  1. TEEs (Trusted Execution Environments): The first approach relies on hardware attestation, using environments like AWS Nitro Enclaves and Intel SGX. In this model, the agent runs inside a secure enclave that issues cryptographic attestations confirming that specific code executed on specific data without tampering. The overhead is usually small (around 5-10% additional latency), acceptable for financial and enterprise-grade use cases where integrity trumps speed.

  2. ZKML (Zero-Knowledge Machine Learning): The second approach is mathematical. ZKML enables agents to generate cryptographic proofs that an output was produced by a specific model without revealing the model weights or private inputs. Lagrange Labs' DeepProve-1 recently demonstrated full zero-knowledge proofs for GPT-2 inference, 54-158 times faster than previous methods.

  3. Restake Security: The third model enforces correctness through economic means rather than computational ones. Protocols like EigenLayer introduce staking-based security, where verifiers stake capital behind an agent's output. If the output is challenged and proven false, the stake is slashed. The system doesn't prove every computation but makes dishonesty economically irrational.
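The economic model in the third approach can be reduced to a toy sketch: a verifier stakes capital behind a claim, and a successful challenge slashes the stake. Parameters and flow below are illustrative assumptions, not EigenLayer's actual mechanics:

```python
# Toy model of economically enforced correctness. Honesty is rational
# whenever the stake at risk exceeds the expected gain from lying.

class StakedClaim:
    def __init__(self, verifier: str, stake: float, claim: str):
        self.verifier, self.stake, self.claim = verifier, stake, claim
        self.slashed = False

    def challenge(self, proven_false: bool) -> float:
        # If the claim is proven false, the full stake is forfeited.
        if proven_false and not self.slashed:
            self.slashed = True
            penalty, self.stake = self.stake, 0.0
            return penalty
        return 0.0

honest = StakedClaim("verifier-a", stake=1000.0, claim="trade executed")
print(honest.challenge(proven_false=False), honest.stake)  # 0.0 1000.0

liar = StakedClaim("verifier-b", stake=1000.0, claim="fraud scan done")
print(liar.challenge(proven_false=True), liar.stake)  # 1000.0 0.0
```

Unlike TEEs or ZKML, nothing here proves the computation itself; the system only makes false claims expensive, which is exactly the trade-off the text describes.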

These mechanisms address the same problem from different angles. However, execution proofs are episodic. They verify a single task, but the market needs something cumulative. This is where reputation becomes critical.

Reputation turns isolated proofs into a long-term performance history. Emerging systems aim to make agent performance portable and cryptographically anchored, rather than relying on platform-specific ratings or opaque internal dashboards.

The Ethereum Attestation Service (EAS) allows users or services to issue signed, on-chain attestations about an agent's behavior. A successful task completion, an accurate prediction, or a compliant transaction can be recorded in a tamper-resistant way and travel with the agent across applications.

(Image source: EAS)

Competitive benchmarking environments are also forming. Agent Arenas evaluate agents on standardized tasks and rank them using scoring systems like Elo. Recall Network reported that over 110,000 participants generated 5.88 million predictions, creating measurable performance data. As these systems scale, they begin to resemble real rating markets for AI agents.
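Elo-style ranking, as used in such arenas, follows a simple update rule after each head-to-head comparison; the K-factor of 32 below is a conventional but arbitrary choice:

```python
# Standard Elo update: the winner gains rating in proportion to how
# unexpected the win was, and the loser gives up the same amount.

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    # score_a: 1.0 if A wins, 0.5 for a draw, 0.0 if A loses.
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Two equally rated agents: the winner gains exactly k/2 points.
a, b = elo_update(1500.0, 1500.0, score_a=1.0)
print(a, b)  # 1516.0 1484.0
```

Because the update is zero-sum and self-correcting, ratings converge toward relative task performance as match volume grows, which is what makes them usable as market-style signals.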

This allows reputation to be carried across platforms.

In traditional finance, agencies like Moody's rate bonds to signal creditworthiness. The agent economy will need an equivalent layer to rate non-human actors. The market will need to assess whether an agent is reliable enough to delegate capital to, whether its outputs are statistically consistent, and whether its behavior remains stable over time.
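One way such a rating layer might weight evidence over time, so that behavior is judged on recent verified outcomes more than stale ones, is exponential decay. The scoring function and 90-day half-life below are illustrative assumptions, not any deployed system's method:

```python
import math

# Decay-weighted reputation: each verified outcome contributes a success
# value (1.0 or 0.0) weighted by how recent it is.

def reputation(outcomes, half_life_days: float = 90.0) -> float:
    # outcomes: list of (age_days, success) pairs, success in {0.0, 1.0}
    decay = math.log(2) / half_life_days
    num = sum(math.exp(-decay * age) * s for age, s in outcomes)
    den = sum(math.exp(-decay * age) for age, _ in outcomes)
    return num / den if den else 0.0

# Same four outcomes, opposite ordering in time: recent failures should
# score worse than old failures followed by recent successes.
fresh_failures = [(5, 0.0), (10, 0.0), (300, 1.0), (320, 1.0)]
old_failures   = [(5, 1.0), (10, 1.0), (300, 0.0), (320, 0.0)]
print(reputation(fresh_failures) < reputation(old_failures))  # True
```

A scheme like this lets a rating agency for agents express "quality decay" directly: an agent cannot coast indefinitely on historical attestations.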

Conclusion

As agents begin to wield real authority, the market will need a clear way to measure their reliability. Agents will carry portable performance records based on verified execution and benchmarking, with scores adjusting for quality decay and permissions traceable to clear authorization. Insurers, merchants, and compliance systems will rely on this data to decide which agents can access capital, data, or regulated workflows.

In summary, these layers begin to constitute the infrastructure of the agent economy:

  1. Discoverability: Agents must be able to discover services in a machine-readable way, or they cannot find opportunities.

  2. Identity: Agents must prove who they are and who authorized them, or they cannot enter the system.

  3. Reputation: Agents must establish a verifiable record proving they are trustworthy, thereby earning ongoing economic trust.



Original Link: https://www.bitpush.news/articles/7617176

