AI Agent Outputs Garbage? The Problem Is You're Not Willing to Burn Enough Tokens

marsbit · Published 2026-03-23 · Updated 2026-03-23

Summary

The core argument is that the quality of an AI Agent's output is directly proportional to the number of tokens invested in the process. More tokens lead to fewer errors, as they allow for deeper reasoning, multiple independent attempts, self-critique from fresh contexts, and verification through testing. This approach can solve problems of scale and complexity but fails when facing novel problems not present in the model's training data. For such novel challenges, human domain knowledge and guidance are essential. Two practical, immediate solutions are proposed: implementing an automatic review cycle (WAIT) for the Agent to repeatedly critique and fix its work, and establishing frequent verification checkpoints (VERIFY) where a separate Agent validates outputs to catch errors early. The key takeaway is that insufficient token investment is often the primary reason for poor Agent performance, not the underlying framework.

Author: Systematic Long Short

Compiled by: Deep Tide TechFlow

Deep Tide Intro: The core argument of this article is just one sentence: The quality of an AI Agent's output is directly proportional to the number of Tokens you invest.

The author isn't speaking in general theoretical terms; instead, they provide two specific methods you can start using today and clearly define the boundary where throwing more Tokens won't help—the "novelty problem."

For readers currently using Agents to write code or run workflows, the information density and practicality are very high.

Introduction

Alright, you have to admit the title is quite eye-catching—but seriously, it's no joke.

In 2023, when we were using LLMs to run production code, people around us were stunned because the common belief at the time was that LLMs could only produce unusable garbage. But we knew something others didn't: the output quality of an Agent is a function of the number of Tokens you invest. It's that simple.

Run a few experiments yourself and you'll see. Have an Agent complete a complex, somewhat niche programming task—for example, implementing a convex optimization algorithm with constraints from scratch. First, run it at the lowest thinking level; then switch to the highest thinking level and have it review its own code to see how many bugs it can find. Try the medium and high levels too. You'll see it plainly: the number of bugs decreases monotonically as the number of Tokens invested increases.

This isn't hard to understand, right?

More Tokens = Fewer errors. You can take this logic a step further; this is essentially the (simplified) core idea behind code review products. In a completely new context, invest a massive number of Tokens (for example, have it parse the code line by line, judging whether each line has a bug)—this can basically catch the vast majority, if not all, bugs. This process can be repeated ten times, a hundred times, each time examining the codebase from a "different angle," and you can eventually unearth all the bugs.
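The line-by-line pass described here can be sketched as a loop. Everything in this sketch is a toy: `judge_line` is a hypothetical stand-in for what a real review product would do with a fresh-context LLM call per line, and it just flags a known-bad pattern so the loop structure is visible.

```python
# Toy sketch of a massive-token review pass: examine every line of a codebase
# and judge whether it contains a bug, repeating the sweep from several
# "different angles". `judge_line` is a hypothetical stand-in for a
# fresh-context LLM call; here it only flags a known-bad pattern.

def judge_line(line: str) -> bool:
    # Stand-in judgment: a real reviewer model would reason about the line
    # in the context of the whole codebase.
    return "off_by_one" in line

def review_codebase(code: str, n_angles: int = 3) -> set[int]:
    findings: set[int] = set()
    lines = code.splitlines()
    for _ in range(n_angles):              # repeat the sweep from each angle
        for i, line in enumerate(lines):   # one judgment per line
            if judge_line(line):
                findings.add(i)
    return findings

sample = "x = 0\ny = off_by_one(x)\nprint(y)"
print(sorted(review_codebase(sample)))  # → [1]
```

The point of the structure is the cost model: Tokens scale with (lines × angles × rounds), which is exactly why this style of review is expensive and exactly why it works.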

There's another empirical support for the view that "burning more Tokens improves Agent quality": those teams that claim to use Agents to write code from start to finish and push it directly to production are either the foundational model providers themselves or extremely well-funded companies.

So, if you're still struggling to get production-level code from your Agent—to be blunt, the problem lies with you. Or rather, with your wallet.

How to Tell If You're Burning Enough Tokens

I wrote an entire article saying the problem definitely isn't your framework (harness), that "keeping it simple" can still produce excellent results, and I still stand by that view. You read that article, followed the advice, and were still greatly disappointed by the Agent's output. You sent me a DM and saw that I read it but didn't reply.

This article is the reply.

Your Agent performs poorly and can't solve the problem, most of the time, simply because you're not burning enough Tokens.

How many Tokens are needed to solve a problem depends entirely on the problem's scale, complexity, and novelty.

"What's 2+2?" doesn't require many Tokens.

"Write me a bot that scans all markets between Polymarket and Kalshi, finds markets that are semantically similar and should settle around the same event, sets no-arbitrage boundaries, and automatically trades with low latency whenever an arbitrage opportunity arises"—this requires burning a huge pile of Tokens.

We found something interesting in practice.

If you invest enough Tokens to handle problems caused by scale and complexity, the Agent *will* solve them, no matter what. In other words, if you want to build something extremely complex, with many components and lines of code, as long as you throw enough Tokens at these problems, they will eventually be completely resolved.

There is one small but important exception.

Your problem cannot be too novel. At this stage, no amount of Tokens can solve the "novelty" problem. Enough Tokens can reduce errors from complexity to zero, but they cannot make an Agent invent something it doesn't know out of thin air.

This conclusion actually came as a relief to us.

We spent enormous effort, burned—a lot, a lot, a whole lot—of Tokens, trying to see if an Agent could reconstruct an institutional investment process with almost no guidance. This was partly to figure out how many years we (as quantitative researchers) have before being completely replaced by AI. It turned out the Agent couldn't get anywhere close to a decent institutional investment process. We believe this is partly because they have never seen such a thing—meaning, institutional investment processes simply don't exist in the training data.

So, if your problem is novel, don't count on solving it by stacking Tokens. You need to guide the exploration process yourself. But once you've defined the implementation plan, you can confidently stack Tokens for execution—no matter how large the codebase or how complex the components, it's not a problem.

Here's a simple heuristic: the Token budget should grow proportionally with the number of lines of code.
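As a toy rendering of that heuristic (the constant is made up, not measured; calibrate it on your own workloads):

```python
# Toy version of the heuristic above: the token budget grows linearly with
# the expected lines of code. TOKENS_PER_LOC is an illustrative constant,
# not a recommendation from the article.

TOKENS_PER_LOC = 400  # hypothetical; tune this against your own tasks

def token_budget(expected_loc: int) -> int:
    return expected_loc * TOKENS_PER_LOC

print(token_budget(5_000))  # → 2000000
```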

What Are the Extra Tokens Actually Doing?

In practice, additional Tokens typically improve the Agent's engineering quality in the following ways:

Allowing it to spend more time reasoning in the same attempt, giving it a chance to discover flawed logic itself. Deeper reasoning = better planning = higher probability of success on the first try.

Allowing it to make multiple independent attempts, exploring different solution paths. Some paths are better than others. Allowing more than one attempt lets it choose the best one.

Similarly, more independent planning attempts allow it to abandon weak directions and keep the most promising ones.

More Tokens allow it to critique its previous work with a fresh context, giving it a chance to improve instead of being stuck in a certain "reasoning inertia."

And, of course, my favorite: more Tokens mean it can use tests and tools for verification. Actually running the code to see if it works is the most reliable way to confirm the answer is correct.
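The "multiple independent attempts" point above can be sketched as best-of-N selection. `attempt` here is a hypothetical stand-in for one full agent run plus a scoring step (for example, a test-suite pass rate):

```python
import random

# Sketch of "multiple independent attempts": run the task N times from
# independent contexts and keep the best candidate. `attempt` is a
# hypothetical stand-in for a full agent run plus a verification score.

def attempt(task: str, rng: random.Random) -> dict:
    # Stand-in: a real run would produce code and a measured quality score.
    return {"task": task, "score": rng.random()}

def best_of_n(task: str, n: int, seed: int = 0) -> dict:
    rng = random.Random(seed)
    candidates = [attempt(task, rng) for _ in range(n)]  # N independent tries
    return max(candidates, key=lambda c: c["score"])     # keep the best path

# More attempts (more Tokens) can only raise the best score, never lower it.
print(best_of_n("arb bot", 8)["score"] >= best_of_n("arb bot", 1)["score"])  # → True
```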

This logic works because engineering failures of Agents are not random. They are almost always due to choosing the wrong path too early, not checking if this path actually works (early on), or not having enough budget to recover and backtrack after discovering a mistake.

That's the story. Tokens are literally the decision quality you buy. Think of it like research work: if you ask a person to answer a difficult question on the spot, the quality of the answer decreases as time pressure increases.

Research, at its core, is what produces the foundational "knowing the answer." Humans spend biological time to produce better answers; Agents spend more compute time to produce better answers.

How to Improve Your Agent

You might still be skeptical, but there are many papers supporting this, and honestly, the very existence of the "reasoning" adjustment knob is all the proof you need.

One paper I particularly like: researchers trained on a small, carefully curated set of reasoning examples, then used a method to force the model to keep thinking when it wanted to stop—specifically by appending "Wait" where it wanted to stop. This single change raised a certain benchmark from 50% to 57%.
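A minimal sketch of that "Wait" trick: when the model emits its stop marker, swap it for "Wait" and ask it to continue. `generate` is a hypothetical stand-in for a real completion call; its "quality improves with nudges" behavior is simulated, not real.

```python
# Minimal sketch of budget forcing as described above: when the model tries
# to stop, replace the stop marker with "Wait" and force another round of
# reasoning. `generate` is a hypothetical stand-in for a completion call.

def generate(prompt: str) -> str:
    # Stand-in: simulated quality improves with how many nudges it received.
    nudges = prompt.count("Wait")
    answer = "draft answer" if nudges < 2 else "refined answer"
    return prompt + f" ... {answer} [STOP]"

def budget_forced(prompt: str, extra_rounds: int = 2) -> str:
    text = generate(prompt)
    for _ in range(extra_rounds):
        # Swap the stop marker for "Wait" and let it keep thinking.
        text = generate(text.replace("[STOP]", "Wait,"))
    return text

print("refined answer" in budget_forced("Prove the bound:"))  # → True
```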

I want to be as clear as possible: if you've been complaining that the code written by your Agent is mediocre, the single highest thinking level is likely still not enough for you.

I'll give you two very simple solutions.

Simple Method One: WAIT

The simplest thing you can start doing today: set up an automatic loop—after building, have the Agent review its work N times with a fresh context, fixing any issues found each time.
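A sketch of that loop, with `build`, `review`, and `fix` as hypothetical stand-ins for real agent calls (in a real setup, each `review` pass would run in a fresh context):

```python
# Sketch of the WAIT loop: after the initial build, the agent reviews its
# own work up to N times, each pass from a fresh context, fixing whatever
# it finds. All three functions are hypothetical stand-ins for agent calls.

def build(task: str) -> dict:
    # Stand-in for the agent's first attempt; it starts with latent bugs.
    return {"task": task, "bugs": 3}

def review(artifact: dict) -> list:
    # Fresh-context critique: returns the issues found this pass
    # (here, simulated as at most one bug per pass).
    return ["bug"] * min(artifact["bugs"], 1)

def fix(artifact: dict, issues: list) -> dict:
    artifact["bugs"] -= len(issues)
    return artifact

def wait_loop(task: str, n_reviews: int = 5) -> dict:
    artifact = build(task)
    for _ in range(n_reviews):        # each pass burns more Tokens...
        issues = review(artifact)     # ...in exchange for fewer errors
        if not issues:
            break                     # reviewer found nothing: stop early
        artifact = fix(artifact, issues)
    return artifact

print(wait_loop("niche optimization task")["bugs"])  # → 0
```

The early-exit matters: once a fresh-context review comes back clean, further passes buy little, so the loop stops spending.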

If you find this simple trick improves your Agent's engineering results, then you at least understand that your problem is just a matter of Token quantity—welcome to the Token burning club.

Simple Method Two: VERIFY

Have the Agent verify its own work early and often. Write tests to prove that the chosen path actually works. This is especially useful for highly complex, deeply nested projects—a function might be called by many other downstream functions. Catching errors upstream can save you a lot of subsequent compute time (Tokens). So, if possible, set up "verification checkpoints" throughout the entire build process.

Finished writing a piece? The main Agent says it's done? Have a second Agent verify it. Independent streams of thought can uncover sources of systematic bias that a single context would miss.
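Both ideas together can be sketched as a build loop with a checkpoint after every component, plus an independent check at the end. `build_component`, `run_tests`, and `second_agent_check` are all hypothetical stand-ins for real agent and tool calls:

```python
# Sketch of VERIFY: a checkpoint after each component, plus a second,
# independent verifier for the finished result. All three helpers are
# hypothetical stand-ins for real agent/tool calls.

def build_component(name: str, attempt: int) -> dict:
    # Stand-in: pretend the second attempt at any component passes its tests.
    return {"name": name, "ok": attempt >= 2}

def run_tests(component: dict) -> bool:
    # Stand-in for actually executing the component's test suite.
    return component["ok"]

def second_agent_check(built: list) -> bool:
    # Stand-in for an unrelated agent re-verifying the finished work.
    return all(c["ok"] for c in built)

def build_with_checkpoints(components: list, max_attempts: int = 3) -> list:
    built = []
    for name in components:
        for attempt in range(1, max_attempts + 1):
            comp = build_component(name, attempt)
            if run_tests(comp):          # checkpoint: verify before moving on
                built.append(comp)
                break
        else:
            raise RuntimeError(f"{name} never passed verification")
    assert second_agent_check(built)     # final, independent verification
    return built

result = build_with_checkpoints(["scanner", "matcher", "executor"])
print([c["name"] for c in result])  # → ['scanner', 'matcher', 'executor']
```

Catching a failing component at its own checkpoint is what saves the downstream Tokens: nothing is built on top of an unverified piece.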

That's basically it. I could write a lot more on this topic, but I believe just realizing these two things and implementing them well can solve 95% of your problems. I firmly believe in doing simple things extremely well, then adding complexity as needed.

I mentioned that "novelty" is a problem that can't be solved with Tokens, and I want to emphasize it again because you will eventually hit this pitfall and come crying to me saying stacking Tokens didn't work.

When the problem you want to solve isn't in the training set, *you* are the one who really needs to provide the solution. Therefore, domain expertise remains extremely important.

Related Questions

Q: What is the core argument of the article regarding AI Agent output quality?

A: The core argument is that the quality of an AI Agent's output is directly proportional to the number of tokens you are willing to invest in the process.

Q: According to the article, what is the one type of problem that cannot be solved by simply using more tokens?

A: Problems that are "novel" or not present in the model's training data cannot be solved by any amount of tokens; they require human guidance and domain expertise.

Q: What are the two simple methods suggested in the article to immediately improve an AI Agent's performance?

A: The two simple methods are: 1. WAIT - set up an automatic loop for the Agent to review its work multiple times with a fresh context and fix any issues found. 2. VERIFY - have the Agent (or a second one) verify its work early and often by writing tests to prove the chosen path works.

Q: How does the article suggest thinking about the relationship between tokens and decision quality?

A: The article suggests thinking of tokens as literally "buying" decision quality: just as a human researcher spends biological time to produce a better answer, an AI Agent spends computational time (tokens) to produce one.

Q: What heuristic does the article provide for determining a sufficient token budget for a task?

A: The article provides a simple heuristic: the token budget should grow proportionally with the number of lines of code required for the task.

