Anthropic Data: Nearly Half of AI Agent Calls Concentrated in Software Engineering, These 16 Vertical Domains Remain Blue Oceans

marsbit · Published 2026-02-24 · Last updated 2026-02-24

Summary

According to Anthropic's comprehensive study on real-world AI Agent usage, nearly 50% of all tool usage by AI agents is concentrated in software engineering. In contrast, 16 other sectors—including healthcare, legal, finance, and education—each account for less than 5% of total usage, representing significant untapped opportunities. A key insight is the "trust deficit": while models like Claude are capable of working autonomously for nearly five hours, the 99.9th percentile of user sessions lasts only about 42 minutes. This gap highlights a major product opportunity. Over time, user trust grows—experienced users shifting from pre-approval to proactive monitoring—but overall adoption still lags behind technical capability. The report suggests that vertical AI applications in underserved domains could spawn hundreds of unicorns, mirroring the rise of SaaS. Success requires deep domain expertise, proprietary data integration, context-aware engineering, and effective change management. Regulatory approaches should enable—not hinder—human-AI collaboration by focusing on monitoring and intervention rather than mandatory step-by-step approvals. In summary, the AI agent landscape remains early-stage, with vast potential in verticals where domain-specific agents can automate complex, high-value workflows.

Author: Garry's List

Compiled by: Deep Tide TechFlow

Deep Tide Introduction: Anthropic has released the most comprehensive study to date on the real-world usage of AI Agents. The core data: software engineering accounts for nearly 50% of AI Agent tool calls, while the remaining share is scattered across 16 vertical domains including healthcare, legal, and education, each below 5%.

This is not a sign of market saturation, but a map of 300 vertical AI unicorns. Even more valuable is a counterintuitive finding cited in the article: models can already work independently for nearly 5 hours, yet users only let them work for 42 minutes. This "trust deficit" is itself the next product opportunity.

Full Text Below:

Software engineering accounts for nearly 50% of all AI Agent tool calls. Sixteen domains including healthcare, legal, and finance are almost untouched, each below 5%. This means there are 300 vertical AI unicorns waiting to be built.

If I were to start a business today, I would stare at the red area in the bar chart above until I saw my future.

Box founder Aaron Levie said:

This chart is a great reminder of how much opportunity there is in the AI Agent space right now.

There will certainly be a lot of horizontal Agent opportunities, but there are also many workflows that require deep domain expertise to truly help users automate the processes unique to their vertical.

The template: build Agent software that integrates proprietary data and effectively bridges user-Agent collaboration on workflows, with deep domain-specific context engineering and the ability to drive change management on the client side.

Many domains still have huge gaps.

Software engineering occupies half of all AI Agent activity. The other half is scattered across 16 vertical domains, none exceeding 5%. Healthcare accounts for 1%, legal 0.9%, and education 1.8%. These are not saturated markets; they are markets that barely exist.

Anthropic just released the most comprehensive study to date on real AI Agent usage. The core finding: software engineering accounts for 49.7% of Agent tool calls on its API. The core conclusion buried deeper: everything else is a blue ocean.

Deployment Lag

One data point should excite entrepreneurs: the model's capabilities far exceed the boundaries of what users are willing to trust it with.

METR's capability assessment shows that Claude can solve tasks that would take a human nearly five hours to complete. But in actual use, the 99.9th percentile session duration is only about 42 minutes. This gap—between what AI can do and what we allow it to do—is a huge opportunity.

Figure: The maximum session duration Claude Code sustained nearly doubled in three months. This reflects not only improved capability but also growing trust.


From October 2025 to January 2026, the 99.9th percentile single-session duration almost doubled, growing from less than 25 minutes to over 45 minutes. Growth was steady across model versions. This isn't just the model getting stronger; it's users learning through repeated use, gradually extending their trust in the Agent.

"From August to December, Claude Code's success rate on internal users' most challenging tasks doubled, while the number of human interventions per session decreased from 5.4 to 3.3."

The capability is already there; deployment hasn't caught up. This isn't a problem; it's a product opportunity.

How Trust Evolves

New users auto-approve Claude Code's actions in about 20% of sessions. By 750 sessions, over 40% of sessions run in full auto-approval mode. But there's a counterintuitive finding: experienced users intervene more, not less. New users intervene in 5% of turns, while experienced users intervene in 9%.

Figure: Trust is a skill that accumulates. New users automatically approve 20% of sessions. By 750 sessions, this exceeds 40%.

Image: Anthropic


This isn't a contradiction but a shift in supervision strategy. Beginners approve step-by-step before actions occur; experienced users authorize first and intervene only if problems arise—they've moved from pre-approval to active monitoring.
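The shift described here, from approving each step up front to authorizing broadly and watching for problems, can be sketched as two supervision modes in a hypothetical agent loop. All names and the approval logic below are illustrative assumptions, not Anthropic's API or Claude Code's actual mechanism:

```python
# Sketch of two supervision strategies for an agent's action loop.
# Illustrative only: action strings, mode names, and checks are invented.

def run_agent(actions, mode, is_suspicious):
    """Execute proposed actions under a supervision mode.

    mode="pre_approval": ask the human before every action (beginner style).
    mode="monitoring":   run freely, but let the human interrupt when an
                         action looks wrong (experienced-user style).
    Returns (executed actions, number of human interventions).
    """
    executed, interventions = [], 0
    for action in actions:
        if mode == "pre_approval":
            if not human_approves(action):   # blocking check before each step
                interventions += 1
                continue
        elif mode == "monitoring":
            if is_suspicious(action):        # human steps in only on problems
                interventions += 1
                continue
        executed.append(action)
    return executed, interventions

def human_approves(action):
    # Stand-in for a real approval prompt; rejects destructive-looking steps.
    return not action.startswith("rm ")

plan = ["read config", "run tests", "rm -rf build", "deploy"]
print(run_agent(plan, "pre_approval", lambda a: "rm" in a))
print(run_agent(plan, "monitoring", lambda a: "rm" in a))
```

The two modes can block the same bad action; what changes is when the human pays attention: continuously before each step, or only when something looks off.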

Here's a safety-relevant finding: on complex tasks, Claude Code proactively requests clarification more than twice as often as humans proactively intervene. The Agent pauses to confirm rather than charging ahead. This is a feature, not a bug.

"The core insight of this study is: the autonomy Agents exercise in practice is co-constructed by the model, the user, and the product. Claude pauses to ask questions when uncertain, thereby limiting its own independence. Users build trust through collaboration with the model and adjust their supervision strategies accordingly."

Levie's Vertical AI Playbook

Aaron Levie points to the immense wealth and value waiting to be unlocked: build Agent software that integrates proprietary data, make it truly solve real problems for real people, pack it with context to maximize intelligent output, and (this is the part most entrepreneurs miss) drive change management on the client side.

This last point is why vertical AI is so hard to replicate. Anyone can build an API wrapper, but few can truly navigate the workflows, regulatory constraints, and organizational resistance unique to medical billing, legal discovery, or building permit approvals.

SaaS grew tenfold every decade over the past few decades. Over 40% of venture capital in the past 20 years flowed to SaaS companies. This industry spawned over 170 SaaS unicorns. The logic is simple: each of these unicorns has a vertical AI version waiting to emerge. And the AI version could be ten times larger because it replaces not just software but also operators.

The Nature of Co-Construction

Anthropic's core finding deserves serious attention from anyone involved in AI policy making. Autonomy is not an inherent property of the model but is co-constructed by the model, the user, and the product. Pre-deployment evaluations cannot capture this; you must measure it in real use.

Anthropic stated officially:

Software engineering accounts for about 50% of Agent tool calls on our API, but we are also seeing emergence in other industries. As the boundaries of risk and autonomy continue to expand, post-deployment monitoring becomes critical. We encourage other model developers to expand on this research.

The safety numbers are reassuring: 73% of tool calls have a human in the loop, and only 0.8% of operations are irreversible. The highest-risk deployment scenarios—such as API key exposure or autonomous crypto trading—are mostly security assessments, not real production environments.

"Regulatory requirements that mandate specific interaction patterns—for example, requiring human approval for every action—will only create friction without necessarily delivering safety benefits."

Policies mandating "approve every action" kill productivity gains without increasing safety. A better goal is to ensure humans can monitor and intervene, not to mandate specific approval workflows.
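This "monitor and intervene, don't mandate per-step approval" posture can be sketched as a simple policy gate: every action is logged for oversight, but only irreversible ones block on a human. The action names and reversibility list below are hypothetical, chosen only to illustrate the idea:

```python
# Sketch of a monitor-and-intervene policy gate. Illustrative assumptions:
# the IRREVERSIBLE set and action names are invented for this example.

IRREVERSIBLE = {"delete_database", "send_payment", "rotate_api_key"}

def policy_gate(action, audit_log, approve):
    """Return True if the action may run.

    Every call is logged so a human can monitor and intervene after the
    fact; only irreversible actions require approval before they run.
    """
    audit_log.append(action)          # full observability for oversight
    if action in IRREVERSIBLE:
        return approve(action)        # human in the loop only here
    return True                       # reversible actions run without friction

log = []
ran = [a for a in ["read_file", "run_tests", "send_payment"]
       if policy_gate(a, log, approve=lambda a: False)]
```

Here the reversible steps run unimpeded while the one irreversible step is held for a human, which is the distinction the article argues regulation should target.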

Where the Unicorns Are Hidden

The map is drawn. Software engineering is already being done. Healthcare, legal, finance, education, customer service, logistics—16 vertical domains, each with single-digit market share—are waiting for someone to truly embed domain expertise into Agents.

Over 170 SaaS unicorns were born in the last era; the next 300 vertical AI unicorns are about to emerge. The founders who pick a vertical, embed domain expertise into Agents, and figure out how to drive change management will own the enterprise software market for the next decade.

The model can work for five hours; users only let it work for 42 minutes. That's the signal: we are still in the very early stages, there is so much left to build, and in countless places that haven't seen even a minute of intelligence at work.

Related Questions

Q: What percentage of AI Agent tool usage is concentrated in software engineering according to Anthropic's data?

A: Software engineering accounts for nearly 50% (49.7%) of all AI Agent tool usage.

Q: What is the key opportunity identified in the gap between AI's capabilities and user trust?

A: The gap between AI's ability to work for nearly 5 hours and users only allowing it to work for about 42 minutes represents a major product opportunity to build trust and increase deployment.

Q: How did user intervention behavior change as users gained more experience with Claude Code?

A: While more experienced users (after 750 sessions) ran over 40% of sessions in auto-approval mode, they actually intervened more frequently (9% of turns) than new users (5% of turns), shifting their strategy from pre-approval to active monitoring.

Q: According to Aaron Levie, what are the key components for building a successful vertical AI agent?

A: The key components are: building agent software that integrates proprietary data, effectively bridging user and agent collaboration, possessing deep domain-specific context engineering capabilities, and driving change management on the customer side.

Q: What does the article suggest is the future market potential for vertical AI compared to SaaS?

A: The article suggests that while the SaaS industry produced over 170 unicorns, there are potentially 300 vertical AI unicorns waiting to be built, and the AI versions could be ten times larger because they replace not just software but also the operators.
