By World Model Factory
Is Claude getting dumber?
Recently, Stella Laurenzo, Senior Director at AMD AI Group, called out Anthropic.
She conducted a retrospective analysis using her team's actual production logs, examining 17,871 thought blocks and 234,760 tool calls across 6,852 session files.
The data shows that Claude began exhibiting significant behavioral degradation starting mid-February.
The median length of Claude's thoughts plummeted from 2,200 characters to 600 characters, a drop of 67%-73%.
The average number of times it read files before editing fell sharply from 6.6 to 2, and a full third of modifications were made without reading the file at all.
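Taken at face value, the reported medians can be sanity-checked with a quick calculation. This is a sketch using only the headline figures quoted above, not the underlying session logs:

```python
# Sanity-check the reported degradation figures.
# These are the medians quoted in the analysis, not raw data.
baseline_thought_chars = 2200
degraded_thought_chars = 600
baseline_reads_per_edit = 6.6
degraded_reads_per_edit = 2.0

thought_drop = 1 - degraded_thought_chars / baseline_thought_chars
reads_drop = 1 - degraded_reads_per_edit / baseline_reads_per_edit

print(f"Thought length drop: {thought_drop:.0%}")  # ~73%, the upper end of the 67%-73% range
print(f"Reads-per-edit drop: {reads_drop:.0%}")    # ~70%
```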
Stella argued in her analysis that as reasoning depth declined, the model gradually stopped reading code at all before modifying it.
She wrote: "When thinking becomes superficial, the model defaults to taking the lowest-cost action."
This is not an isolated case; developer dissatisfaction began to erupt collectively as early as March.
On X, a user wrote: "I thought I was going crazy these past few weeks with Claude. It feels slower, lazier, like it's not thinking before answering, and I'm not hallucinating the results".
On Reddit, another user complained: "Claude feels less conscious, like it had a lobotomy. Besides getting dumber, it also started taking extreme actions without asking...".
Others saw this as a blatant betrayal by Anthropic: "They just made the problem invisible to all of us users, essentially thinking 'if you can't measure it, we won't show you'... This is the result of AI labs optimizing for profit rather than output quality".
From user complaints to data confirmation, Claude's degradation is now essentially established.
Anthropic's official response also tacitly admitted that thinking depth and effort are indeed being continuously adjusted.
If this is intentional by Anthropic, does it mean that model capabilities will 'shrink' imperceptibly in the future?
Or perhaps, the strongest model capabilities will no longer be provided equally to everyone?
Claude's Dumbing Down is "Intentional"
Claude Opus 4.6 and its coding-specific mode, Claude Code, were hailed as the coding pinnacle when launched in January 2026.
Its thinking was astonishingly deep, its approach research-first (investigate before acting), its long-context handling stable, and its multi-file refactoring nearly unbeatable.
AMD's internal team even used it to merge and deploy 190,000 lines of legacy code over a weekend, maximizing productivity.
However, the turning point occurred in early February.
Anthropic quietly launched the "adaptive thinking" feature, officially described as "allowing the model to intelligently adjust thinking depth based on task complexity".
Superficially user-friendly, it actually activated a global throttling switch.
In early March, the default effort value was quietly reduced to 'medium', while thought process summaries were quickly hidden, preventing users from easily seeing how deeply the model had actually thought.
During the same period, Anthropic released 14 minor version updates but suffered 5 major outages, indicating that computational and load pressures were nearing their limits.
Developer feedback began to erupt collectively, with some noticing particularly poor performance during peak hours (US Eastern afternoon), suspecting dynamic load throttling.
The situation escalated until April, when the AMD AI director personally stepped in, confirming the issue with data and igniting a public debate.
At this point, Anthropic's Claude Code lead, Boris Cherny, was forced to issue an official response.
He stated that "adaptive thinking" affects the *display* of thinking, not the underlying reasoning, and insisted this was an "intentional optimization" rather than a bug. Users wanting better results could manually set effort to 'high'.
Anthropic's subtext was clear: dumbing down is not a bug, it's a deliberate product optimization; just adjust the parameters yourselves.
This response instantly sparked greater anger.
The key issue is that from mid-February to early April, Anthropic never pre-announced any major changes.
Countless paying users, completely unaware, continued paying full subscription fees while the model was quietly throttled.
So, Claude's dumbing down isn't the model's "brain breaking"; it's Anthropic engaging in a more covert, more commercial action:
By lowering the default thinking depth, they trade for faster speed, lower load, and reduced GPU costs.
Model Capability Tiering
Behind this dumbing-down storm lies a concerning phenomenon:
Model capabilities have begun to be tiered.
Stella's calculation was blunt: based on AWS Bedrock's on-demand pricing, her team's actual inference cost for March was approximately $42,121, while the actual Claude Code subscription fee paid that same month was only $400.
This gap suggests that, at least in extremely heavy usage scenarios, a huge deficit exists between subscription-based revenue and actual computational consumption.
This was likely market share bought by Anthropic with capital burn, but such subsidies have limits.
When heavy users' inference consumption hits a certain threshold, the sustainability of the business model begins to waver.
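The gap Stella describes is easy to quantify. This is a sketch using the figures quoted above; the Bedrock-equivalent cost and the $400 subscription are as reported in her analysis, not independently verified:

```python
# Compare the reported on-demand inference cost against the subscription actually paid.
bedrock_equivalent_cost = 42_121  # USD for March, per the analysis
subscription_paid = 400           # USD, Claude Code subscription that month

ratio = bedrock_equivalent_cost / subscription_paid
deficit = bedrock_equivalent_cost - subscription_paid

print(f"Compute consumed vs. fees paid: ~{ratio:.0f}x")            # ~105x
print(f"Implied monthly subsidy for this one team: ${deficit:,}")  # $41,721
```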
Boris Cherny's response revealed a key signal: Anthropic is testing default 'high effort' mode for Teams and Enterprise users.
In other words, stronger reasoning is being treated as a more expensive resource for tiered allocation, no longer a capability everyone gets equally by default.
This means the business models for large models will further diverge.
Currently, 80% of Anthropic's revenue comes from enterprise services and API calls; high-stickiness B2B is the real lifeline.
Anthropic's current actions are all about funneling enterprise usage onto its first-party platform.
For high-value B2B clients, Anthropic will likely accelerate the release of stronger enterprise-grade offerings, providing the full model capabilities to clients paying the true cost.
Meanwhile, consumer monthly subscribers will continue to get the "good enough" dumbed-down version, suitable for lightweight needs like chatting, copywriting, and code completion, but never crossing the cost red line.
As for the middle ground—independent developers and small teams who need complex reasoning but cannot afford enterprise pricing—they will become the most squeezed group.
A user on X confirmed this:
"Claude Enterprise API performs much better than Pro/Max subscriptions. Testing with the same framework, Enterprise and Pro/Max just behave differently. But this also means spending $4-12k per month now, depending on how many threads I run simultaneously."
This means the future commercialization path for large models will likely be enterprise-first, with consumer offerings trimmed to cut costs.
Who Pays for the Dumbing Down?
The Claude dumbing-down incident is not an isolated case but a microcosm of the AI industry entering the second half of commercialization.
Whether it's OpenAI's multiple covert downgrades of the GPT series or Google's silent rate-limiting of Gemini, the same script is repeating:
Lure users with high performance first, then control costs through software throttling.
The inevitable result is that enterprises can buy stronger models at high prices, with SLA guarantees, while consumers get distilled, low-effort mass-market models.
The rate of intelligence gains for consumer models has clearly fallen behind that of enterprise models.
More seriously, this differentiation is covert.
Anthropic and other vendors are reducing inference budgets in ways that are difficult to detect, with no prompts for ordinary users.
This choice might alleviate computational cost pressure short-term, but the long-term cost is a loss of brand trust.
When "Claude secretly dumbs down" becomes user consensus, Anthropic will lose not just a few heavy users, but the entire ecosystem's confidence in the narrative of AI普惠 (AI for all) and transparency.
Looking more broadly, the Claude incident reflects the AI industry's transition from wild growth to intensive cultivation.
The subsidy period is over; real costs are emerging. Who bears these costs?
Whether costs are covered by compressing the consumer experience and raising enterprise prices, or by waiting for software and hardware breakthroughs in efficiency, the answer will shape the landscape of AI applications for the next five years.
The future trend is already emerging: AI is no longer an increasingly intelligent myth of universal benefit but is moving towards elitist stratification.