Your AI Agent is Quietly Changing the Rules of the Internet

Bitpush · Published on 2026-03-05 · Last updated on 2026-03-05

Abstract

AI agents are rapidly transforming the internet landscape, evolving from experimental tools into essential components of daily operations—managing emails, scheduling meetings, and handling support tickets. In 2025, automated traffic surpassed human activity, accounting for 51% of all web traffic, with AI-driven visits to US retail sites surging 4,700% year-over-year. However, confidence in fully autonomous agents has declined amid security concerns, as infrastructure struggles to keep pace with their expansion. Key challenges include discoverability—agents must efficiently find machine-readable services amid web pages designed for humans, prompting a shift from SEO to Agent-Oriented Discoverability (AEO). Identity is equally critical: agents need cryptographic authentication, delegated authority, and real-world accountability to transact securely, driving emerging standards such as ERC-8004 and protocols like Visa's Trusted Agent Protocol. Finally, reputation systems are needed to verify agent performance through methods such as trusted execution environments (TEEs), zero-knowledge machine learning (ZKML), and economic security models, enabling portable, auditable records of reliability. Together, discoverability, identity, and reputation form the foundational infrastructure for an agent-driven economy, allowing agents to operate at scale with trust and autonomy.

Author: Vaidik Mandloi

Original Title: Know Your Agent

Compiled and Arranged by: BitpushNews


The promise that AI agents will reshape the internet is gradually becoming reality. They have moved beyond experimental tools in chat windows to become an indispensable part of daily operations—from cleaning up inboxes and scheduling meetings to responding to support tickets. They quietly raise productivity, a shift that is easy to overlook.

However, this growth is not merely anecdotal.

By 2025, automated traffic had surpassed human traffic, accounting for 51% of total web activity. On US retail websites alone, AI-driven traffic increased 4,700% year-over-year. AI agents now operate across internal systems, many with the ability to access data, trigger workflows, and even initiate transactions.

However, confidence in fully autonomous agents has dropped from 43% to 22% within a year, largely due to rising security incidents. Nearly half of enterprises still use shared API keys to authenticate agents, a method never designed for autonomous systems to move value or act independently.

The problem is: agents are scaling faster than the infrastructure designed to govern them.

In response, entirely new protocol stacks are emerging. Stablecoins, card network integrations, and agent-native standards like x402 are enabling machine-initiated transactions. Simultaneously, new identity and verification layers are being developed to help agents identify themselves and operate within structured environments.

But enabling payments is not equivalent to enabling an economy. Because once agents can move value, more fundamental questions arise: How do they discover suitable services in a machine-readable way? How do they prove identity and authorization? How do we verify that the actions they claim to perform actually happened?

This article will explore the infrastructure needed for an agent-driven economy to operate at scale and assess whether these layers are mature enough to support persistent, autonomous participants operating at machine speed.

Agents Cannot Buy What They Cannot See

Before an agent can pay for a service, it must first find that service. This sounds simple but is currently the area of greatest friction.

The internet was built for humans to read pages. When humans search for content, search engines return ranked links. These pages are optimized for persuasion. They are filled with layouts, trackers, ads, navigation bars, and stylistic elements that make sense to humans but are mostly "noise" to machines.

When an agent requests the same page, it receives raw HTML. A typical blog post or product page in this form might require about 16,000 tokens. When converted to a clean Markdown file, the token count drops to about 3,000. This means the model must process 80% less content. For a single request, this difference might be negligible. But when an agent makes thousands of such requests across multiple services, excessive processing compounds into latency, cost, and higher inference complexity.
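The arithmetic above compounds quickly. A minimal sketch, using the article's 16,000 vs. 3,000 token figures; the request count and the idea of a per-request overhead function are illustrative assumptions:

```python
# Back-of-the-envelope cost of parsing human-oriented pages at agent scale.
# The 16,000 / 3,000 token figures come from the article; the request
# volume is an illustrative assumption.

HTML_TOKENS = 16_000      # typical product page fetched as raw HTML
MARKDOWN_TOKENS = 3_000   # the same content as clean Markdown

def extra_tokens(requests: int) -> int:
    """Tokens spent stripping interface noise rather than doing useful work."""
    return (HTML_TOKENS - MARKDOWN_TOKENS) * requests

reduction = 1 - MARKDOWN_TOKENS / HTML_TOKENS   # ~81% fewer tokens per page
overhead = extra_tokens(10_000)                 # waste across 10,000 requests

print(f"per-page reduction: {reduction:.0%}")
print(f"overhead for 10k requests: {overhead:,} tokens")
```

A single page costs an extra 13,000 tokens; ten thousand requests cost an extra 130 million, which is where the latency and inference cost accumulate.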

(Image source: Cloudflare)

Agents end up spending significant computational effort stripping away interface elements before they can access the core information needed to take action. This effort does not improve output quality; it merely compensates for a web never designed for them.

As agent-driven traffic grows, this inefficiency becomes more apparent. AI-driven crawling of retail and software websites has increased sharply over the past year, and automated traffic now constitutes the majority of total web activity. Meanwhile, about 79% of major news and content websites block at least one AI crawler. From their perspective, the reaction is understandable: agents extract content without interacting with ads, subscriptions, or traditional conversion funnels. Blocking them protects revenue.

The problem is that the web has no reliable way to distinguish between malicious scrapers and legitimate procurement agents. Both appear as automated traffic, both originate from cloud infrastructure. To the system, they look identical.

The deeper issue is that agents are not trying to "consume" pages; they are trying to discover possibilities for action.

When a human searches for "flights under $500," a list of ranked links is sufficient. A person can compare options and make a decision. When an agent receives the same instruction, it needs something completely different. It needs to know which services accept booking requests, what input format is required, how prices are calculated, and whether payment can be settled programmatically. Very few services clearly publish this information.

(Image source: TowardsAI)

This is why the conversation is shifting from Search Engine Optimization (SEO) to Agent-Oriented Discoverability, often called AEO. If the end-user is an agent, ranking on a search page becomes less important. What matters is whether a service can describe its capabilities in a way that an agent can interpret without guessing. If not, it risks becoming "invisible" in a growing share of economic activity.
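What an AEO-friendly description might look like can be sketched as a capability manifest an agent queries instead of scraping a page. All field names here (`action`, `input_schema`, `settlement`, the `x402` tag) are illustrative assumptions, not a published standard:

```python
# A hypothetical machine-readable capability manifest -- the kind of
# description AEO argues services should publish. Field names are
# illustrative, not drawn from any ratified spec.

manifest = {
    "service": "example-flights",
    "capabilities": [
        {
            "action": "book_flight",
            "input_schema": {"origin": "IATA", "destination": "IATA",
                             "max_price_usd": "number"},
            "pricing": "per_booking",
            "settlement": ["x402", "card"],   # programmatic payment rails
        }
    ],
}

def supports(manifest: dict, action: str, settlement: str) -> bool:
    """Can an agent invoke `action` and settle via `settlement` without guessing?"""
    return any(
        cap["action"] == action and settlement in cap["settlement"]
        for cap in manifest["capabilities"]
    )

print(supports(manifest, "book_flight", "x402"))   # the agent can act directly
```

A service publishing something like this is legible to an agent in one request; one that does not forces the agent back to scraping and inference.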

Agents Need Identity

(Image source: Hackernoon)

Once an agent can discover services and initiate transactions, the next major problem is letting the system on the other end know who it is dealing with. In other words: identity.

Today's financial systems run on far more machine identities than human ones. In finance, the ratio of non-human to human identities is approximately 96 to 1. APIs, service accounts, automated scripts, and internal agents dominate institutional infrastructure. Most of them were never designed to have discretion over capital. They execute predefined instructions; they cannot negotiate, choose vendors, or initiate payments on open networks.

Autonomous agents change this boundary. If an agent can directly move stablecoins or trigger a checkout process without manual confirmation, the core question shifts from "Can it pay?" to "Who authorized it to pay?"

This is where identity becomes fundamental, giving rise to the concept of "Know Your Agent" (KYA).

Just as financial institutions verify clients before allowing them to transact, services interacting with autonomous agents must verify three things before granting access to capital or sensitive operations:

  1. Cryptographic Authenticity: Does this agent actually control the keys it claims to use?

  2. Delegated Authority: Who granted this agent permission, and what are its limits?

  3. Real-World Affiliation: Is this agent linked to a legally accountable entity?

These checks together form the identity stack:

  • The base layer is cryptographic key generation and signing. Standards like ERC-8004 attempt to formalize how agents can anchor identity in a verifiable on-chain registry.

  • The middle layer is the identity provider layer. This binds keys to real-world entities like registered companies, financial institutions, or verified individuals. Without this binding, a signature only proves control, not accountability.

  • The edge layer is the verification infrastructure. Payment processors, CDNs, or application servers verify signatures in real time, check associated credentials, and enforce permission boundaries. Visa's Trusted Agent Protocol is an example for permissioned commerce, letting merchants verify that an agent is authorized to transact on behalf of a specific user. Stripe's Agent Commerce Protocol (ACP) is pushing similar checks into programmable checkout and stablecoin flows.

Meanwhile, the Universal Commerce Protocol (UCP), led by Google and Shopify, allows merchants to publish "capability manifests" that agents can discover and negotiate with. It acts as an orchestration layer and is expected to integrate with Google Search and Gemini.
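The "delegated authority" check in the stack above can be sketched concretely: a principal issues a scoped, signed grant to an agent, and the edge verifies both the signature and the limits before allowing a payment. This is a minimal sketch only; HMAC with a shared key stands in for the asymmetric signatures and on-chain registries real protocols such as ERC-8004 or the Trusted Agent Protocol would use, and the grant fields are invented for illustration:

```python
# Sketch of a scoped delegation grant: authenticity (signature), delegated
# limits (budget, expiry), and an edge-side verification step. HMAC is a
# stand-in for real asymmetric signing; all field names are illustrative.
import hashlib
import hmac
import json
import time

ISSUER_KEY = b"principal-secret"  # held by the delegating party (illustrative)

def issue_grant(agent_id: str, max_usd: float, ttl_s: int) -> dict:
    claims = {"agent": agent_id, "max_usd": max_usd, "exp": time.time() + ttl_s}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_grant(grant: dict, amount_usd: float) -> bool:
    payload = json.dumps(grant["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, grant["sig"]):
        return False                                  # fails authenticity
    c = grant["claims"]
    return time.time() < c["exp"] and amount_usd <= c["max_usd"]  # enforce limits

grant = issue_grant("agent-42", max_usd=500.0, ttl_s=3600)
print(verify_grant(grant, 120.0))   # within the delegated budget
print(verify_grant(grant, 900.0))   # exceeds it, so the edge refuses
```

The point of the three-layer stack is that each check fails independently: a valid signature with an expired or oversized grant is still rejected.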

(Image source: FintechBrainfood)

An important nuance is that permissionless and permissioned systems will coexist.

On public blockchains, agents can transact without centralized gatekeepers. This increases speed and composability but also intensifies compliance pressure. Stripe's acquisition of Bridge highlights this tension. Stablecoins enable instant cross-border transfers, but compliance obligations don't disappear just because settlement happens on-chain.

This tension inevitably draws regulators in. Once autonomous agents can initiate financial transactions and interact with markets without direct human supervision, questions of accountability become unavoidable. The financial system cannot allow capital to flow through unidentified or unauthorized actors, even if those actors are pieces of software.

Regulatory frameworks are already being adopted. The Colorado AI Act, effective February 1, 2026, introduces accountability requirements for high-risk automated systems, with similar legislation advancing globally. As agents begin executing financial decisions at scale, identity will cease to be optional. If discoverability makes agents visible, identity is the credential that makes them recognized.

Verifying Agent Execution and Reputation

Once agents start performing tasks involving money, contracts, or sensitive information, merely having an identity might not be enough. A verified agent can still hallucinate, misrepresent its work, leak information, or underperform.

Thus, the most critical question becomes: Can it be proven that the agent actually did the work it claims?

If an agent states that it analyzed 1,000 documents, detected fraud patterns, or executed a trading strategy, there must be a way to verify that the computation actually occurred and that the output was not forged or corrupted. This requires a dedicated verification layer.

Currently, there are three approaches to achieve this:

  1. TEEs (Trusted Execution Environments): The first approach relies on attestation through hardware like AWS Nitro and Intel SGX. In this model, the agent runs inside a secure enclave that issues cryptographic certificates confirming specific code executed on specific data and was not tampered with. The overhead is usually small (around 5-10% additional latency), acceptable for financial and enterprise-grade use cases where integrity trumps speed.

  2. ZKML (Zero-Knowledge Machine Learning): The second approach is mathematical. ZKML enables agents to generate cryptographic proofs that an output was produced by a specific model without revealing the model weights or private inputs. Lagrange Labs' DeepProve-1 recently demonstrated full zero-knowledge proofs for GPT-2 inference, 54-158 times faster than previous methods.

  3. Restake Security: The third model enforces correctness through economic means rather than computational ones. Protocols like EigenLayer introduce staking-based security, where verifiers stake capital behind an agent's output. If the output is challenged and proven false, the stake is slashed. The system doesn't prove every computation but makes dishonesty economically irrational.
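The economic logic of the third approach can be made explicit with a toy expected-value model. The numbers and the detection probability are illustrative assumptions, not parameters of EigenLayer or any real protocol:

```python
# Toy model of restaked security: a verifier stakes capital behind an
# agent's output, and a successful challenge slashes the stake. The point
# is that cheating must have negative expected value. All numbers are
# illustrative assumptions.

def dishonesty_is_rational(stake: float, slash_fraction: float,
                           cheat_profit: float, detection_prob: float) -> bool:
    """Is the profit from cheating larger than the expected slashing loss?"""
    expected_loss = stake * slash_fraction * detection_prob
    return cheat_profit > expected_loss

# With a $10,000 stake, full slashing, and 90% detection, a $500 cheat loses money:
print(dishonesty_is_rational(10_000, 1.0, 500, 0.9))   # cheating is irrational
# With a tiny stake behind the same output, the cheat becomes profitable:
print(dishonesty_is_rational(100, 1.0, 500, 0.9))
```

This is why stake sizing matters: the system never proves the computation, it only prices dishonesty above its payoff.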

These mechanisms address the same problem from different angles. However, execution proofs are episodic. They verify a single task, but the market needs something cumulative. This is where reputation becomes critical.

Reputation turns isolated proofs into a long-term performance history. Emerging systems aim to make agent performance portable and cryptographically anchored, rather than relying on platform-specific ratings or opaque internal dashboards.

The Ethereum Attestation Service (EAS) allows users or services to issue signed, on-chain attestations about an agent's behavior. A successful task completion, an accurate prediction, or a compliant transaction can be recorded in a tamper-resistant way and travel with the agent across applications.

(Image source: Ethereum Attestation Service)
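How episodic attestations become a cumulative record can be sketched in a few lines. This is a minimal sketch in the spirit of EAS, not its actual schema or on-chain ABI; the fields and the scoring rule are illustrative assumptions:

```python
# Aggregating signed attestations about an agent into a portable score.
# The Attestation fields and the success-rate metric are illustrative,
# not the EAS schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Attestation:
    attester: str     # who signed the claim (merchant, service, user)
    agent: str        # the subject agent
    task: str
    success: bool

def reputation(agent: str, attestations: list[Attestation]) -> float:
    """Fraction of attested tasks this agent completed successfully."""
    relevant = [a for a in attestations if a.agent == agent]
    if not relevant:
        return 0.0
    return sum(a.success for a in relevant) / len(relevant)

log = [
    Attestation("merchant-a", "agent-42", "checkout", True),
    Attestation("merchant-b", "agent-42", "refund", True),
    Attestation("merchant-c", "agent-42", "checkout", False),
]
print(reputation("agent-42", log))   # 2 of 3 attested tasks succeeded
```

Because each record is signed by the attester rather than stored in one platform's database, the score can travel with the agent.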

Competitive benchmarking environments are also forming. Agent Arenas evaluate agents based on standardized tasks and rank them using scoring systems like Elo. Recall Network reported over 110,000 participants generated 5.88 million predictions, creating measurable performance data. As these systems scale, they begin to resemble real rating markets for AI agents.

This allows reputation to be carried across platforms.

In traditional finance, agencies like Moody's rate bonds to signal creditworthiness. The agent economy will need an equivalent layer to rate non-human actors. The market will need to assess whether an agent is reliable enough to delegate capital to, whether its outputs are statistically consistent, and whether its behavior remains stable over time.

Conclusion

As agents begin to wield real authority, the market will need a clear way to measure their reliability. Agents will carry portable performance records built on verified execution and benchmarking, with scores that decay as quality degrades and permissions that trace back to explicit authorization. Insurers, merchants, and compliance systems will rely on this data to decide which agents can access capital, data, or regulated workflows.

In summary, these layers begin to constitute the infrastructure of the agent economy:

  1. Discoverability: Agents must be able to discover services in a machine-readable way, or they cannot find opportunities.

  2. Identity: Agents must prove who they are and who authorized them, or they cannot enter the system.

  3. Reputation: Agents must establish a verifiable record proving they are trustworthy, thereby earning ongoing economic trust.


Twitter: https://twitter.com/BitpushNewsCN

Bitpush TG Discussion Group: https://t.me/BitPushCommunity

Bitpush TG Subscription: https://t.me/bitpush

Original Link: https://www.bitpush.news/articles/7617176


