Author: Systematic Long Short
Compiled by: Deep Tide TechFlow
Deep Tide Intro: The core argument of this article is just one sentence: The quality of an AI Agent's output is directly proportional to the number of Tokens you invest.
The author isn't speaking in general theoretical terms; instead, they provide two specific methods you can start using today and clearly define the boundary where throwing more Tokens won't help—the "novelty problem."
For readers currently using Agents to write code or run workflows, the information density and practicality are very high.
Introduction
Alright, you have to admit the title is quite eye-catching—but seriously, it's no joke.
In 2023, when we were using LLMs to run production code, people around us were stunned because the common belief at the time was that LLMs could only produce unusable garbage. But we knew something others didn't: the output quality of an Agent is a function of the number of Tokens you invest. It's that simple.
Run a few experiments yourself and you'll see. Give an Agent a complex, somewhat niche programming task, for example implementing a constrained convex optimization algorithm from scratch. Run it first at the lowest thinking level, then switch to the highest thinking level and have it review its own code to see how many bugs it can find. Repeat with the medium and high levels. You'll see it plainly: the number of bugs decreases monotonically as the number of Tokens invested increases.
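The experiment above is easy to script. A minimal sketch, where `run_task` and `review` are hypothetical hooks wrapping your actual agent calls (neither is a real API; swap in whatever client you use):

```python
def bug_curve(run_task, review, levels=("low", "medium", "high")):
    """Map each thinking level to the bug count found in that level's output.

    run_task(level) -> code produced by the agent at that thinking level;
    review(code)    -> number of bugs a max-effort review pass reports.
    """
    return {level: review(run_task(level)) for level in levels}
```

Plot the resulting counts against the levels and the monotone decline should be visible.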
This isn't hard to understand, right?
More Tokens = Fewer errors. You can take this logic a step further; this is essentially the (simplified) core idea behind code review products. In a completely new context, invest a massive number of Tokens (for example, have it parse the code line by line, judging whether each line has a bug)—this can basically catch the vast majority, if not all, bugs. This process can be repeated ten times, a hundred times, each time examining the codebase from a "different angle," and you can eventually unearth all the bugs.
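A line-by-line pass like the one just described can be sketched as follows; `judge` is a hypothetical stand-in for an LLM call that inspects a single line:

```python
def line_by_line_review(code, judge):
    """Token-hungry review: ask the agent about every single line.

    judge(line) returns an issue description, or None if the line looks fine.
    Re-running this with differently prompted judges gives the "different angles."
    """
    issues = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        verdict = judge(line)
        if verdict is not None:
            issues.append((lineno, verdict))
    return issues
```

Each call to `judge` burns Tokens on one line only, which is exactly the point: the budget scales with how closely you look.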
There's another empirical support for the view that "burning more Tokens improves Agent quality": those teams that claim to use Agents to write code from start to finish and push it directly to production are either the foundational model providers themselves or extremely well-funded companies.
So, if you're still struggling to get production-level code from your Agent—to be blunt, the problem lies with you. Or rather, with your wallet.
How to Tell If You're Burning Enough Tokens
I wrote an entire article arguing that the problem definitely isn't your framework (harness), that "keeping it simple" can still produce excellent results, and I stand by that view. You read that article, followed the advice, and were still greatly disappointed by your Agent's output. You sent me a DM; you saw that I read it and didn't reply.
This article is the reply.
Most of the time, your Agent performs poorly and fails to solve the problem simply because you're not burning enough Tokens.
How many Tokens are needed to solve a problem depends entirely on the problem's scale, complexity, and novelty.
"What's 2+2?" doesn't require many Tokens.
"Write me a bot that scans all markets between Polymarket and Kalshi, finds markets that are semantically similar and should settle around the same event, sets no-arbitrage boundaries, and automatically trades with low latency whenever an arbitrage opportunity arises"—this requires burning a huge pile of Tokens.
We found something interesting in practice.
If you invest enough Tokens to handle problems caused by scale and complexity, the Agent *will* solve them, no matter what. In other words, if you want to build something extremely complex, with many components and lines of code, as long as you throw enough Tokens at these problems, they will eventually be completely resolved.
There is one small but important exception.
Your problem cannot be too novel. At this stage, no amount of Tokens can solve the "novelty" problem. Enough Tokens can reduce errors from complexity to zero, but they cannot make an Agent invent something it doesn't know out of thin air.
This conclusion actually came as a relief to us.
We spent enormous effort, burned—a lot, a lot, a whole lot—of Tokens, trying to see if an Agent could reconstruct an institutional investment process with almost no guidance. This was partly to figure out how many years we (as quantitative researchers) have before being completely replaced by AI. It turned out the Agent couldn't get anywhere close to a decent institutional investment process. We believe this is partly because they have never seen such a thing—meaning, institutional investment processes simply don't exist in the training data.
So, if your problem is novel, don't count on solving it by stacking Tokens. You need to guide the exploration process yourself. But once you've defined the implementation plan, you can confidently stack Tokens for execution—no matter how large the codebase or how complex the components, it's not a problem.
Here's a simple heuristic: the Token budget should grow proportionally with the number of lines of code.
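As a back-of-the-envelope formula (the per-line constant below is an illustrative assumption, not a measured number):

```python
def token_budget(lines_of_code, tokens_per_line=200):
    """Scale the build-and-review Token budget linearly with codebase size.

    tokens_per_line is a made-up default for illustration; calibrate it
    against your own projects.
    """
    return lines_of_code * tokens_per_line

# Under this assumption, a 5,000-line project warrants a budget
# on the order of a million Tokens.
```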
What Are the Extra Tokens Actually Doing?
In practice, additional Tokens typically improve the Agent's engineering quality in the following ways:
Allowing it to spend more time reasoning in the same attempt, giving it a chance to discover flawed logic itself. Deeper reasoning = better planning = higher probability of success on the first try.
Allowing it to make multiple independent attempts, exploring different solution paths. Some paths are better than others. Allowing more than one attempt lets it choose the best one.
Similarly, more independent planning attempts allow it to abandon weak directions and keep the most promising ones.
More Tokens allow it to critique its previous work with a fresh context, giving it a chance to improve instead of getting stuck in a kind of "reasoning inertia."
And, of course, my favorite: more Tokens mean it can use tests and tools for verification. Actually running the code to see if it works is the most reliable way to confirm the answer is correct.
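The "multiple independent attempts, keep the best" idea above reduces to a best-of-N selection. A sketch, with `attempt` and `score` as hypothetical hooks (say, a fresh-context agent run and a test harness that reports the fraction of tests passed):

```python
def best_of_n(attempt, score, n=3):
    """Make n independent attempts and keep the one that scores highest.

    attempt(i) -> a candidate solution (each call should use a fresh context);
    score(candidate) -> a number, e.g. the fraction of tests passed.
    """
    candidates = [attempt(i) for i in range(n)]
    return max(candidates, key=score)
```

The same shape works for planning: generate several plans, score them, and only pay for full execution of the winner.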
This logic works because an Agent's engineering failures are not random. They almost always come from choosing the wrong path too early, from not checking early whether that path actually works, or from not having enough budget left to backtrack and recover after discovering a mistake.
That's the story. Tokens are, quite literally, the decision quality you buy. Think of it like research work: ask a person to answer a difficult question on the spot, and the quality of the answer drops as the time pressure rises.
Research, at its core, is the process that produces "knowing the answer" in the first place. Humans spend biological time to produce better answers; Agents spend compute time to produce better answers.
How to Improve Your Agent
You might still be skeptical, but there are many papers supporting this, and honestly, the very existence of the "reasoning" adjustment knob is all the proof you need.
One paper I particularly like: researchers trained on a small, carefully curated set of reasoning examples, then forced the model to keep thinking whenever it tried to stop, specifically by appending "Wait" at the point where it would have stopped. This single change raised a benchmark score from 50% to 57%.
I want to be as clear as possible: if you've been complaining that the code written by your Agent is mediocre, the single highest thinking level is likely still not enough for you.
I'll give you two very simple solutions.
Simple Method One: WAIT
The simplest thing you can start doing today: set up an automatic loop—after building, have the Agent review its work N times with a fresh context, fixing any issues found each time.
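A minimal sketch of that loop, assuming hypothetical `review` and `fix` hooks that each call the agent with a fresh context:

```python
def wait_loop(artifact, review, fix, n=5):
    """Run up to n fresh-context review-and-fix passes over the built artifact.

    review(artifact)      -> a list of issues (empty when the pass finds nothing);
    fix(artifact, issues) -> a revised artifact.
    Both stand in for separate agent calls with a clean context each time.
    """
    for _ in range(n):
        issues = review(artifact)
        if not issues:
            break  # a clean pass: stop burning Tokens early
        artifact = fix(artifact, issues)
    return artifact
```

The early exit matters: you pay for passes only until a review comes back clean.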
If you find this simple trick improves your Agent's engineering results, then you at least understand that your problem is just a matter of Token quantity—welcome to the Token burning club.
Simple Method Two: VERIFY
Have the Agent verify its own work early and often. Write tests to prove that the chosen path actually works. This is especially useful for highly complex, deeply nested projects—a function might be called by many other downstream functions. Catching errors upstream can save you a lot of subsequent compute time (Tokens). So, if possible, set up "verification checkpoints" throughout the entire build process.
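Those checkpoints can be wired into the build loop itself. A sketch, where each step and the `verify` callback (for example, running the test suite) are hypothetical hooks:

```python
def build_with_checkpoints(steps, verify, initial=None):
    """Run named build steps with a verification checkpoint after each one.

    steps: a list of (name, step) pairs; step(state) -> new project state;
    verify(state, name) -> True if the checkpoint passes (e.g. tests are green).
    Failing fast upstream is far cheaper than debugging a finished build.
    """
    state = initial
    for name, step in steps:
        state = step(state)
        if not verify(state, name):
            raise RuntimeError(f"checkpoint failed after step {name!r}")
    return state
```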
Finished building a component? The main Agent says it's done? Have a second Agent verify it. Independent streams of thought don't share the same systematic biases.
That's basically it. I could write a lot more on this topic, but I believe just realizing these two things and implementing them well can solve 95% of your problems. I firmly believe in doing simple things extremely well, then adding complexity as needed.
I said that "novelty" is a problem Tokens can't solve, and I want to emphasize it again, because you will eventually hit this wall and come back complaining that stacking Tokens didn't work.
When the problem you want to solve isn't in the training set, *you* are the one who really needs to provide the solution. Therefore, domain expertise remains extremely important.