Your AI Agent is Quietly Changing the Rules of the Internet

Bitpush | Published on 2026-03-05 | Last updated on 2026-03-05

Abstract

AI agents are rapidly transforming the internet landscape, evolving from experimental tools into essential components of daily operations: managing emails, scheduling meetings, and handling support tickets. In 2025, automated traffic surpassed human activity, accounting for 51% of all web traffic, with AI-driven visits to US retail sites surging 4,700% year-over-year. However, confidence in fully autonomous agents has declined amid security concerns, as infrastructure struggles to keep pace with their expansion. Key challenges include discoverability: agents must efficiently find machine-readable services amid web pages designed for humans, prompting a shift from SEO to Agent-Oriented Discoverability (AEO). Identity is equally critical: agents require cryptographic authentication, delegated authority, and real-world accountability to transact securely, driving emerging standards such as ERC-8004 and protocols such as Visa's Trusted Agent Protocol. Finally, reputation systems are needed to verify agent performance through methods such as trusted execution environments (TEEs), zero-knowledge machine learning (ZKML), and economic security models, enabling portable, auditable records of reliability. Together, discoverability, identity, and reputation form the foundational infrastructure of an agent-driven economy, allowing agents to operate at scale with trust and autonomy.

Author: Vaidik Mandloi

Original Title: Know Your Agent

Compiled and Arranged by: BitpushNews


The promise that AI Agents will change the internet landscape is gradually becoming a reality. They have moved beyond being experimental tools in chat windows to become an indispensable part of our daily operations—from cleaning up inboxes and scheduling meetings to responding to support tickets. They silently enhance productivity, a change often overlooked.

However, this growth is not merely anecdotal.

In 2025, automated traffic surpassed human traffic, accounting for 51% of total web activity. AI-driven traffic to US retail websites alone increased by 4,700% year-over-year. AI agents now operate across internal systems, many with the ability to access data, trigger workflows, and even initiate transactions.

However, confidence in fully autonomous agents has dropped from 43% to 22% within a year, largely due to rising security incidents. Nearly half of enterprises still use shared API keys to authenticate agents, a method never designed for autonomous systems to move value or act independently.

The problem is: agents are scaling faster than the infrastructure designed to govern them.

In response, entirely new protocol stacks are emerging. Stablecoins, card network integrations, and agent-native standards like x402 are enabling machine-initiated transactions. Simultaneously, new identity and verification layers are being developed to help agents identify themselves and operate within structured environments.

But enabling payments is not equivalent to enabling an economy. Because once agents can move value, more fundamental questions arise: How do they discover suitable services in a machine-readable way? How do they prove identity and authorization? How do we verify that the actions they claim to perform actually happened?

This article will explore the infrastructure needed for an agent-driven economy to operate at scale and assess whether these layers are mature enough to support persistent, autonomous participants operating at machine speed.

Agents Cannot Buy What They Cannot See

Before an agent can pay for a service, it must first find that service. This sounds simple but is currently the area of greatest friction.

The internet was built for humans to read pages. When humans search for content, search engines return ranked links. These pages are optimized for persuasion. They are filled with layouts, trackers, ads, navigation bars, and stylistic elements that make sense to humans but are mostly "noise" to machines.

When an agent requests the same page, it receives raw HTML. A typical blog post or product page in this form might require about 16,000 tokens. When converted to a clean Markdown file, the token count drops to about 3,000. This means the model must process 80% less content. For a single request, this difference might be negligible. But when an agent makes thousands of such requests across multiple services, excessive processing compounds into latency, cost, and higher inference complexity.
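The arithmetic above can be made concrete. A minimal sketch, using the 16,000-vs-3,000 token figures cited here; the per-token price and request volume are illustrative assumptions, not figures from the article:

```python
# Token and cost gap between raw HTML and cleaned Markdown, using the
# figures cited above. Price and request volume are illustrative only.
HTML_TOKENS = 16_000   # a typical blog post or product page as raw HTML
MD_TOKENS = 3_000      # the same page converted to clean Markdown

reduction = 1 - MD_TOKENS / HTML_TOKENS   # ~0.81, i.e. ~80% less to process

# Over thousands of requests the overhead compounds.
PRICE_PER_M_TOKENS = 3.00   # hypothetical $ per 1M input tokens
requests = 10_000
html_cost = HTML_TOKENS * requests / 1e6 * PRICE_PER_M_TOKENS
md_cost = MD_TOKENS * requests / 1e6 * PRICE_PER_M_TOKENS

print(f"reduction: {reduction:.0%}")            # roughly 81%
print(f"cost: ${html_cost:.2f} vs ${md_cost:.2f}")
```

For one request the difference is pocket change; across an agent fleet making millions of calls, it is the difference between a viable and an unviable unit economics.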

(Image source: Cloudflare)

Agents end up spending significant computational effort stripping away interface elements before they can access the core information needed to take action. This effort does not improve output quality; it merely compensates for a web never designed for them.

As agent-driven traffic grows, this inefficiency becomes more apparent. AI-driven crawling of retail and software websites has increased significantly over the past year and now constitutes the majority of total web activity. Meanwhile, about 79% of major news and content websites block at least one AI crawler. From their perspective, this reaction is understandable: agents extract content without interacting with ads, subscriptions, or traditional conversion funnels, so blocking them protects revenue.

The problem is that the web has no reliable way to distinguish between malicious scrapers and legitimate procurement agents. Both appear as automated traffic, both originate from cloud infrastructure. To the system, they look identical.

The deeper issue is that agents are not trying to "consume" pages; they are trying to discover possibilities for action.

When a human searches for "flights under $500," a list of ranked links is sufficient. A person can compare options and make a decision. When an agent receives the same instruction, it needs something completely different. It needs to know which services accept booking requests, what input format is required, how prices are calculated, and whether payment can be settled programmatically. Very few services clearly publish this information.

(Image source: TowardsAI)

This is why the conversation is shifting from Search Engine Optimization (SEO) to Agent-Oriented Discoverability, often called AEO. If the end-user is an agent, ranking on a search page becomes less important. What matters is whether a service can describe its capabilities in a way that an agent can interpret without guessing. If not, it risks becoming "invisible" in a growing share of economic activity.
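What an AEO-friendly service description might look like can be sketched minimally. The manifest fields and the `supports` helper below are hypothetical illustrations, not any published standard:

```python
# Hypothetical capability manifest a flight-booking service might publish
# at a well-known URL. Field names are illustrative, not a real spec.
manifest = {
    "service": "example-flights",
    "capabilities": [
        {
            "action": "search_flights",
            "input": {"origin": "IATA code", "destination": "IATA code",
                      "max_price_usd": "number"},
            "payment": {"methods": ["card", "stablecoin"],
                        "settlement": "programmatic"},
        }
    ],
}

def supports(manifest: dict, action: str, payment_method: str) -> bool:
    """Minimal check an agent might run before engaging a service."""
    for cap in manifest.get("capabilities", []):
        if cap.get("action") == action and \
           payment_method in cap.get("payment", {}).get("methods", []):
            return True
    return False

print(supports(manifest, "search_flights", "stablecoin"))  # True
```

The point is not the exact schema but the contract: a service that states its actions, inputs, and settlement options in structured form can be evaluated by an agent in one cheap lookup instead of thousands of tokens of HTML parsing.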

Agents Need Identity

(Image source: Hackernoon)

Once an agent can discover services and initiate transactions, the next major problem is letting the system on the other end know who it is dealing with. In other words: identity.

Today's financial systems run on far more machine identities than human ones. In finance, the ratio of non-human to human identities is approximately 96 to 1. APIs, service accounts, automated scripts, and internal agents dominate institutional infrastructure. Most of them were never designed to have discretion over capital. They execute predefined instructions; they cannot negotiate, choose vendors, or initiate payments on open networks.

Autonomous agents change this boundary. If an agent can directly move stablecoins or trigger a checkout process without manual confirmation, the core question shifts from "Can it pay?" to "Who authorized it to pay?"

This is where identity becomes fundamental, giving rise to the concept of "Know Your Agent" (KYA).

Just as financial institutions verify clients before allowing them to transact, services interacting with autonomous agents must verify three things before granting access to capital or sensitive operations:

  1. Cryptographic Authenticity: Does this agent actually control the keys it claims to use?

  2. Delegated Authority: Who granted this agent permission, and what are its limits?

  3. Real-World Affiliation: Is this agent linked to a legally accountable entity?
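The three checks above can be sketched as a single mandate-verification routine. This is a toy model: an HMAC shared secret stands in for real public-key signatures, and all field names are illustrative:

```python
import hashlib
import hmac
import json
import time

# Toy sketch of the three KYA checks. HMAC stands in for real
# public-key signatures; fields and names are illustrative only.
PRINCIPAL_KEY = b"principal-secret"   # key of the delegating human/company

def issue_mandate(agent_id: str, max_usd: float, expires: float) -> dict:
    """The principal signs a spending mandate for an agent."""
    payload = {"agent": agent_id, "max_usd": max_usd, "expires": expires,
               "issuer": "acme-corp"}          # real-world affiliation
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(PRINCIPAL_KEY, msg, hashlib.sha256).hexdigest()
    return payload

def verify_mandate(mandate: dict, amount_usd: float, now: float) -> bool:
    body = {k: v for k, v in mandate.items() if k != "sig"}
    msg = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(PRINCIPAL_KEY, msg, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(mandate["sig"], expected)  # authenticity
            and amount_usd <= mandate["max_usd"]           # delegated limits
            and now < mandate["expires"])                  # still valid

m = issue_mandate("agent-42", max_usd=500.0, expires=time.time() + 3600)
print(verify_mandate(m, 120.0, time.time()))   # True: within limits
print(verify_mandate(m, 9000.0, time.time()))  # False: exceeds limit
```

A production system would use asymmetric keys and verifiable credentials rather than a shared secret, but the shape is the same: prove the signature, check the scope, trace the issuer.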

These checks together form the identity stack:

  • The base layer is cryptographic key generation and signing. Standards like ERC-8004 attempt to formalize how agents can anchor identity in a verifiable on-chain registry.

  • The middle layer is the identity provider layer. This binds keys to real-world entities like registered companies, financial institutions, or verified individuals. Without this binding, a signature only proves control, not accountability.

  • The edge layer is the verification infrastructure. Payment processors, CDNs, or application servers verify signatures in real time, check associated credentials, and enforce permission boundaries. Visa's Trusted Agent Protocol is an example on the permissioned side, allowing merchants to verify that an agent is authorized to transact on behalf of a specific user. Stripe's Agent Commerce Protocol (ACP) is pushing similar checks into programmable checkout and stablecoin flows.

Meanwhile, the Universal Commerce Protocol (UCP), led by Google and Shopify, allows merchants to publish "capability manifests" that agents can discover and negotiate with. It acts as an orchestration layer and is expected to integrate with Google Search and Gemini.

(Image source: FintechBrainfood)

An important nuance is that permissionless and permissioned systems will coexist.

On public blockchains, agents can transact without centralized gatekeepers. This increases speed and composability but also intensifies compliance pressure. Stripe's acquisition of Bridge highlights this tension. Stablecoins enable instant cross-border transfers, but compliance obligations don't disappear just because settlement happens on-chain.

This tension inevitably draws regulators in. Once autonomous agents can initiate financial transactions and interact with markets without direct human supervision, questions of accountability become unavoidable. The financial system cannot allow capital to flow through unidentified or unauthorized actors, even if those actors are pieces of software.

Regulatory frameworks are already being adopted. The Colorado AI Act, effective February 1, 2026, introduces accountability requirements for high-risk automated systems, with similar legislation advancing globally. As agents begin executing financial decisions at scale, identity will cease to be optional. If discoverability makes agents visible, identity is the credential that makes them recognized.

Verifying Agent Execution and Reputation

Once agents start performing tasks involving money, contracts, or sensitive information, merely having an identity might not be enough. A verified agent can still hallucinate, misrepresent its work, leak information, or underperform.

Thus, the most critical question becomes: Can it be proven that the agent actually did the work it claims?

If an agent states it analyzed 1,000 documents, detected fraud patterns, or executed a trading strategy, there must be a way to verify that the computation actually occurred and that the output was not forged or corrupted. This requires a performance layer.

Currently, there are three approaches to achieve this:

  1. TEEs (Trusted Execution Environments): The first approach relies on hardware attestation, using platforms such as AWS Nitro and Intel SGX. In this model, the agent runs inside a secure enclave that issues cryptographic attestations confirming that specific code executed on specific data and was not tampered with. The overhead is usually small (around 5-10% additional latency), acceptable for financial and enterprise-grade use cases where integrity trumps speed.

  2. ZKML (Zero-Knowledge Machine Learning): The second approach is mathematical. ZKML enables agents to generate cryptographic proofs that an output was produced by a specific model without revealing the model weights or private inputs. Lagrange Labs' DeepProve-1 recently demonstrated full zero-knowledge proofs for GPT-2 inference, 54-158 times faster than previous methods.

  3. Restake Security: The third model enforces correctness through economic means rather than computational ones. Protocols like EigenLayer introduce staking-based security, where verifiers stake capital behind an agent's output. If the output is challenged and proven false, the stake is slashed. The system doesn't prove every computation but makes dishonesty economically irrational.
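The economic logic of the third model fits in a few lines: lying is irrational whenever the expected slash exceeds the gain. The numbers below are purely illustrative:

```python
# Toy model of staking-based security: dishonesty is unprofitable when
# the expected slashed stake exceeds the gain from a false output.

def dishonesty_profitable(gain: float, stake: float, p_caught: float) -> bool:
    """Expected value of lying: gain minus expected slash."""
    return gain - p_caught * stake > 0

# Well-capitalized verifier, meaningful challenge rate: lying loses money.
print(dishonesty_profitable(gain=100.0, stake=1_000.0, p_caught=0.5))  # False

# Thin stake relative to the prize: the incentive breaks down.
print(dishonesty_profitable(gain=100.0, stake=50.0, p_caught=0.5))     # True
```

This is why such systems key stake requirements to the value at risk: the security guarantee is only as strong as the ratio of slashable capital to potential gain.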

These mechanisms address the same problem from different angles. However, execution proofs are episodic. They verify a single task, but the market needs something cumulative. This is where reputation becomes critical.

Reputation turns isolated proofs into a long-term performance history. Emerging systems aim to make agent performance portable and cryptographically anchored, rather than relying on platform-specific ratings or opaque internal dashboards.

The Ethereum Attestation Service (EAS) allows users or services to issue signed, on-chain attestations about an agent's behavior. A successful task completion, an accurate prediction, or a compliant transaction can be recorded in a tamper-resistant way and travel with the agent across applications.

(Image source: EAS)
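How signed attestations might roll up into a portable score can be sketched minimally. The record fields below are illustrative and do not follow the actual EAS schema:

```python
# Sketch: aggregating per-task attestations into a simple success rate.
# Field names are illustrative, not the EAS schema.
from statistics import mean

attestations = [
    {"agent": "agent-42", "task": "invoice-audit", "success": True},
    {"agent": "agent-42", "task": "trade-exec", "success": True},
    {"agent": "agent-42", "task": "kyc-check", "success": False},
]

def success_rate(records: list, agent_id: str):
    """Fraction of attested tasks the agent completed successfully."""
    scores = [1.0 if r["success"] else 0.0
              for r in records if r["agent"] == agent_id]
    return mean(scores) if scores else None

print(success_rate(attestations, "agent-42"))  # two of three tasks succeeded
```

Because each underlying record is signed and tamper-resistant, any counterparty can recompute such a score independently rather than trusting a platform's dashboard.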

Competitive benchmarking environments are also forming. Agent arenas evaluate agents on standardized tasks and rank them using scoring systems like Elo. Recall Network reported that over 110,000 participants generated 5.88 million predictions, creating measurable performance data. As these systems scale, they begin to resemble real rating markets for AI agents.
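The Elo scoring such arenas use is a standard pairwise update; a minimal version follows (the K-factor of 32 is a common but arbitrary choice):

```python
# Standard Elo rating update after one head-to-head task between agents.

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after agent A plays agent B."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))  # A's win probability
    score_a = 1.0 if a_won else 0.0
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - expected_a))
    return r_a_new, r_b_new

# Two evenly matched agents; the winner gains what the loser gives up.
print(elo_update(1500, 1500, a_won=True))  # (1516.0, 1484.0)
```

The appeal for agent markets is the same as in chess: ratings converge over many tasks, so a single lucky result cannot fake sustained competence.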

This allows reputation to be carried across platforms.

In traditional finance, agencies like Moody's rate bonds to signal creditworthiness. The agent economy will need an equivalent layer to rate non-human actors. The market will need to assess whether an agent is reliable enough to delegate capital to, whether its outputs are statistically consistent, and whether its behavior remains stable over time.

Conclusion

As agents begin to wield real authority, the market will need a clear way to measure their reliability. Agents will carry portable performance records based on verified execution and benchmarking, with scores adjusting for quality decay and permissions traceable to clear authorization. Insurers, merchants, and compliance systems will rely on this data to decide which agents can access capital, data, or regulated workflows.

In summary, these layers begin to constitute the infrastructure of the agent economy:

  1. Discoverability: Agents must be able to discover services in a machine-readable way, or they cannot find opportunities.

  2. Identity: Agents must prove who they are and who authorized them, or they cannot enter the system.

  3. Reputation: Agents must establish a verifiable record proving they are trustworthy, thereby earning ongoing economic trust.


Twitter: https://twitter.com/BitpushNewsCN

Bitpush TG Discussion Group: https://t.me/BitPushCommunity

Bitpush TG Subscription: https://t.me/bitpush

Original Link: https://www.bitpush.news/articles/7617176

