A new paper titled "The Minimalist Economics of AGI" is being widely circulated. We sat down with the paper's authors for a conversation, covering:
· Automation vs. Verification: The Core Economic Divide
· Why AI Agents Now Feel Like Colleagues, What's Happening to Junior Roles, and the "Coder's Curse"
· The Value of "Meaning Makers," Consensus, and Status Economies
· Why Cryptocurrency Could Become Key Infrastructure for Identity, Provenance, and Trust
· Two Possible Futures: A Hollowed-Out Economy vs. An Augmented Economy
This episode features Christian Catalini, founder of the MIT Cryptoeconomics Lab, and Eddy Lazzarin, CTO of a16z crypto, in conversation with Robert Hackett, delving into how automation is reshaping the labor market and the nature of intelligence.
What do these changes mean for startups, the future of work, and your career?
Here is the conversation:
Robert Hackett: Hello everyone. Today we have Christian Catalini, co-founder of Lightspark and founder of the MIT Cryptoeconomics Lab, and also Eddy Lazzarin from a16z crypto.
We're going to discuss Christian's newly published paper, "The Minimalist Economics of AGI."
My first question: What prompted you to start researching the economic relationship between AI and the real world?
Christian Catalini: I'd say it stemmed from a semi-existential crisis. We're all confronting how rapidly the technology is advancing and how quickly everything is changing.
I'm an optimist, but the core questions are always: What should we do? What should we focus on? What is worth our time, energy, and attention?
A few months ago, we wrote an article about measurement. The core idea was: Anything that can be measured will eventually be automated. That doesn't sound like good news. The core of this second paper is: If this assumption holds true, and we push it to the extreme, what happens?
What will the economy look like? What will be the nature of labor? What should startups do? What should existing giants do? Ultimately, what will the future look like?
Some judgments will be right, some will be wrong. Hopefully, we're on the right track. The paper is now public, and we're seeing which points resonate and which don't.
Robert: You said this stemmed from a semi-existential crisis?
Christian: My main takeaways are threefold. First, this technology is still within our control. Second, its positive value is orders of magnitude greater than the pessimists claim. Third, I think we all have a guide to action.
We can think: Where do we create value? What kind of things do we do in our jobs? Work is often a bundle of tasks. When some of those tasks or parts of the job are automated, people get very anxious.
I think programming is going through this process now: many talented people who have written elegant, excellent code over the past few decades are now finding, 'Wow, AI is doing my job.'
AI Agents: From Tools to Colleagues
Robert: I want to dig deeper. We also have Eddy Lazzarin with us today, who has been CTO at a16z crypto for several years. Eddy, how do you view these changes?
Eddy Lazzarin: Let me first set the timeline alongside the paper's context. Many people felt that a qualitative change occurred around December 2025. The change is that a series of incremental improvements in agent capabilities reached a tipping point: AI agents can now perform long-horizon tasks.
A year ago, it felt like: I ask an agent to do one small thing, it does it great, but I have to give the next instruction, step by step.
Now, you can give it less guidance. Maybe it's not perfect, but suddenly, it's like working with a person.
You don't have to break things down extremely finely and follow up step by step; that's extreme micromanagement. Now you just chat clearly, it goes off and does it, and comes back with results a day or two later. This qualitative change unleashes huge imagination, and everyone is starting to face this reality.
Facing that reality is partly an emotional rollercoaster, but the more interesting part is how to maximize value in real production and business scenarios.
People are gradually discovering: AI can produce an enormous amount of work, some results are outstanding, taking a fraction of the time. But it often has subtle flaws that weren't fully appreciated before.
For example, software engineering work is being redefined. People used to think software engineering was sitting down and writing a bunch of code: thinking about the problem, understanding requirements, then writing code. The code was the output.
But the reality is, AI is helping us better deconstruct and understand this. It's a very fine-grained, iterative process of correction, collecting feedback, and integration, not just line-by-line coding. It's a holistic task. So, the focus of good engineers is shifting rapidly.
The process of experimenting, guiding, and taking risks is what Christian calls verification in the paper.
The change is that the proportion of effort spent on line-by-line coding is becoming minuscule, almost zero in some extreme 'Vibe Coding' scenarios. Now, the vast majority of the work is verification.
Automation vs. Verification: The Core Economic Divide
Christian: The automation part is intuitive. Agents can essentially do more of what people did before. But currently, they are still somewhat limited by the observable domain. All the codebases they learned from during training or fine-tuning are their foundation.
Many people will say, 'Then they can't innovate, they have no creativity, no taste.'
I completely disagree. In fact, innovation is largely just the recombination of ideas. Humans have probably only explored a tiny fraction of the possible combinations between disciplines. So I believe that just by leveraging the knowledge we give them, these agents will be highly innovative.
In the new economy, verification is a significant cost. What is verification cost? It starts with the concept of measurement. If you agree that AI is very good at replicating processes where data exists, then you start to ask: What is still immeasurable today?
Some things are immeasurable because they are inherently unmeasurable. Economists call this Knightian uncertainty, named after economist Frank Knight.
Simply put, it's the difference between being able to assign probabilities to future events and being completely unable to assign probabilities.
Robert: Those without an economics background might be more familiar with Donald Rumsfeld's 'unknown unknowns'.
Christian: Yes.
Unknown unknowns are essentially the unmeasurable part, usually related to the future. This is why, even if you throw an agent into the stock market, it might perform well on average—even better than your financial advisor—but it likely can't handle dramatic changes in the environment, like geopolitical shifts, etc. There are many more examples.
So in the paper, verification is essentially the act of applying all the implicit metrics you've internalized as a human, from birth through your career.
Two people might have very similar knowledge and professional experience, but their combined judgment will never be exactly the same. When people say 'this person has great taste,' 'is an excellent curator,' 'has good judgment'... One inspiration for this paper was: everyone is finding various excuses to comfort themselves, like 'machines will never be able to do X, Y, Z.'
But these excuses are vague. How do you define taste? How do you define good judgment? Worse, the judgment a good engineer needed three months ago is probably much more than what's needed now.
So we need to find something more fundamental, something that can be nailed down. Our conclusion is: as long as there is data behind it that can be used for automation, it will be automated.
Three Types of Human Roles in the Future Economy
Robert: In the near term, you categorize various tasks and roles in the economy into three types, looking at their degree of automatability, or rather, the measurability of their outputs and behaviors.
Christian: I think humans still have a lot of irreplaceable space in many dimensions. First, of course, is verification.
Right now, the leverage of any individual in their profession is enormous compared to before December 2025. This means we should all be more ambitious, rethinking existing workflows, what we call the AI sandwich.
A company or startup could have just one human, we call them the conductor, responsible for steering the verification direction, ensuring the system can be corrected when it deviates from expectations. The top layer might be one person, or a small team.
The middle layer will have a large number of agents. We're already seeing people trying all sorts of novel things.
The bottom layer will have a group of top verifiers. With the right tools, top experts in every field will be responsible for ensuring the system's output meets expectations. This is extremely important work. For a long time, domain experts will shine in this part.
But here's the bad news: When you are doing this work, you are also creating labeled data for your own replacement. We've seen the simplest version of this before: people labeling images for AI companies, participating in training; those jobs are no longer needed now.
Now, large foundational model labs are hiring top experts from various fields like finance. These people are creating evaluation standards and training data, which will ultimately replace their peers. So the verification layer is very important, many people will succeed in it, it rewards super-specialization. If you are the one who can provide the final unlock, your leverage is huge.
Robert: That's the first type. And this role of verifier, you call it the coder's curse.
Christian: The coder's curse is this mechanism: if you are a top verifier, you must constantly move up the value chain because the technology keeps getting better.
The conductor I mentioned earlier is essentially the person driving the intent. Entrepreneurs are conductors; they see the future and imagine a path to get there.
Then there is a category of work that we must acknowledge is easily automated. These positions have already disappeared or are about to disappear. Society hasn't really dealt with these impacts yet; there will be a huge need for retraining, pushing people towards more frontier knowledge areas.
People sometimes misunderstand the paper: we say human verification is the last step, but often, AI will verify AI. There will be a long chain of verification before it finally reaches a human.
There's another, hardest-to-define role, which we call meaning makers. These people are very good at understanding trends, social changes, issues society cares about, those things that require everyone to coordinate and reach consensus. Art is like this, and crypto networks are to some extent as well.
These meaning makers operate outside the measurable domain. People sometimes say these jobs require a 'human touch.' But I do think people severely overestimate the importance of this human touch. For example, psychological counseling, elderly care, child care.
I think people will have various concerns initially, but no one is really considering the massive drop in cost. If it becomes 100 times, 1000 times cheaper, people will quickly change their minds. In fact, we already know people are extensively using LLMs to answer very private, personal questions.
There's another type of work where 'human-made' will become a very important label. Cryptocurrency will play a key role here because without strong cryptographic technology, we would quickly lose the essence of this identity. But 'human-made' is valuable simply because human time and attention are scarce.
Not because it's better, but just because you know a human invested scarce time and attention to create that experience. These things will still matter.
Cryptocurrency's Place in the AI World: Identity, Provenance, Trust
Robert: You mentioned cryptography. What is cryptocurrency's place in this world?
Christian: Very important.
When we started researching, many people had already pointed out that large models and AI are probabilistic, while cryptocurrency is deterministic. You can imagine using smart contracts to set guardrails for agents, or giving agents the ability to buy and sell resources.
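To make the 'guardrails' idea concrete, here is a minimal, hypothetical Python sketch (not from the paper) of a deterministic spend limit that an agent's proposed payments must clear; an on-chain smart contract could enforce the same kind of rule trustlessly, but the logic is the same:

```python
from dataclasses import dataclass

@dataclass
class SpendGuardrail:
    """Illustrative guardrail an agent must clear before spending: a per-transaction
    cap and a running budget, enforced deterministically outside the model."""
    per_tx_limit: float   # maximum amount allowed in a single payment
    total_budget: float   # total amount the agent may spend overall
    spent: float = 0.0    # running total of authorized spending

    def authorize(self, amount: float) -> bool:
        """Approve the payment only if it stays within both limits."""
        if amount <= 0 or amount > self.per_tx_limit:
            return False
        if self.spent + amount > self.total_budget:
            return False
        self.spent += amount
        return True

# The agent proposes payments; the guardrail (like a smart contract) decides.
guardrail = SpendGuardrail(per_tx_limit=50.0, total_budget=200.0)
print(guardrail.authorize(30.0))   # True: within both limits
print(guardrail.authorize(75.0))   # False: exceeds the per-transaction cap
```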
These logics hold. But I think there is a deeper complementarity between AI and cryptocurrency. Maybe it's not obvious in the economy today because the side effects haven't manifested yet: issues related to identity or the provenance of digital information.
I think in the coming months, as these capabilities truly become powerful, we will enter completely uncharted territory. Every digital platform will have to face the reality that content (posts, images, anything) that was once generated by humans could now come from an agent.
As this trend develops, society will have to completely restructure its identity systems. In an environment where trust is increasingly scarce, crypto primitives will shine in a vast number of applications. Everything built over the past decade will become more foundational. Back to verification: when the underlying information is on the blockchain, verification is cheaper, more reliable, more trustworthy.
Eddy: The cost of automation is plummeting. The broad verification cost we talked about is also decreasing, but not as fast, creating an interesting gap.
You can describe this gap in many ways; some would call it an opportunity. This is the crux of Christian's judgment on human labor: if there is such a bottleneck, a measurability gap arising from human general adaptability, experience, and generality, then humans might specialize in verification faster than machines can in the short term.
Machines do have some challenges with verification that are hard to handle in the short term. Long term, I don't think it's permanent, but certainly in the short term.
Cryptography and blockchain are verification tools. Proof of provenance is just a set of cryptographic evidence that something passed through certain people, certain paths, or underwent certain deterministic transformations. This gives us signals, making cross-category verification easier. So anything that makes verification simpler will help fill this gap.
The Hidden Cost of Automation: Systemic Risk and Liability
Eddy: Can we talk about the 'Trojan Horse' problem? We've talked about the risks to workers, and there's a lot more to say there, but from the perspective of economic production efficiency, automation is extremely cheap. What risks does that pose to the economy?
Christian: We're already seeing signs. Many companies say X% of their code is now machine-generated.
Product release cycles are shorter. But at the same time, we know humans can't review all the code; it likely carries technical debt.
We've all had that temptation: ask an LLM a question, glance at the answer, and publish it as our own work without full verification, because the models are getting better. But whether it's incorrect sentences, wrong code, or vulnerabilities that eventually sneak into the codebase, I think we'll see more and more of these issues.
The paper's point is that releasing AI-generated code, copy, or any output with potential errors is a completely rational choice because you cannot fully verify it. Scaled up to the whole society, this means we might be accumulating some degree of systemic risk.
While development accelerates, hopefully we'll develop better verification tools to retrospectively review what we might have already released. But in the medium term, companies face this dilemma: investing in developing more robust verification tools (including cryptographic primitives) is expensive now and might slow down development. The benefits are realized in the future, but companies are eager to release products and grow.
So I think founders will split into two camps: some will focus on long-term responsibility and building the right way. We already see some signs of what could be called 'liability as software'. As we deploy agents as employees, liability and insurance issues will become increasingly important. It's not the most glamorous topic, but we will see systemic failures in reality.
Eddy: This idea is very interesting. Because if previous software production was primarily done directly by humans, you could assume that many steps had human observation and quality control. Not that there were never errors, but someone was touching every part along the way.
But as automation increases, risk increases, value increases. The stakes are also rising dramatically, which is why we're willing to tolerate it. But the ability to supervise, constrain, and understand risk boundaries must expand.
Therefore, introducing mechanisms like insurance, putting a value on the risk of failure, might become an important part of managing enterprises that cannot be fully supervised. You want to delegate the responsibility of quantifying risk and understanding problems to experts.
I find it interesting that even software development could acquire a completely new financial dimension it didn't have before.
Christian: Going back to cryptocurrency, everything we've built over the past decade has pushed the boundary of how we measure and weight risk. You can borrow from DeFi, prediction markets; these primitives suddenly become crucial.
If you're deploying software and agents, the technology stack that allows agents to see better signals is important. A simple example: I spoke with a founder working on agent trading and payments. He found that when he switched from traditional payment systems to stablecoin payments, the system performed more reliably because all signals were on-chain. The agent could better understand what was happening, rather than just calling an API with no feedback; it could see the full context of the behavior.
Another interesting point related to the insurance and liability you mentioned. Some say network effects will be a sustainable moat in the AI era. I think the reality is more nuanced. AI agents and autonomous systems are very good at breaking down many of the moats that make two-sided platforms defensible. The cost of launching these platforms, and the cost of cold-starting both sides of a market, is decreasing.
But another kind of network effect becomes more important: if you own critical proprietary data generated within your business, data that allows you to scale verification from humans to machines, you can better underwrite risk, make better decisions, and offer safer products at lower cost.
Therefore, when comparing incumbents and startups: incumbents with complete databases of failure cases will become extremely valuable. And startups focused on building positive feedback loops around verification (e.g., bringing in top experts, learning from decisions) will achieve huge success.
Eddy: This further proves that proprietary data might be one of the most defensible assets.
Two Futures: A Hollowed-Out Economy vs. An Augmented Economy
Robert: I have a question I'm very eager to explore. The paper mentions a hollowed-out economy and an augmented economy. Can you explain? What's the key difference?
Christian: Okay, let's start with the hollowed-out economy. There are already early signs. Tech companies will realize they can do more with fewer people.
Of course, they'll start with below-average or average employees, because AI can handle that; and young practitioners, because the capabilities of senior employees can now be extended 10x, 100x, depending on the task. This is one of the forces driving change.
The second thing we mentioned is the coder's curse. When experts do training, make decisions, they are essentially generating labeled data. This data can be used in the future to make the same decisions without the expert.
Finally, there's alignment drift. Simply put: you can't treat alignment as a one-time process, 'we trained the model, aligned it, done deal.' It's more like raising a child, requiring constant correction, continuous feedback.
Put these three dynamics together with the fact that the incentive to release unverified AI output is extremely high, because I get immediate productivity gains (e.g., '60% of code is machine-generated') while part of the cost only manifests in the future. The result is that we might rush towards an economy where we stop cultivating future verifiers.
Junior talent (our future top verifiers) is becoming scarcer. This group is shrinking. We are creating potential risks that could ultimately lead to what's called a hollowed-out economy.
Again, I'm an optimist. I think we will ultimately move towards an augmented economy. The question is how quickly we can get there and whether we can make the transition as smooth as possible for those who need retraining and adaptation.
The augmented economy is the opposite. We realize: junior talent is not being developed. But the good news is: AI is incredibly magical at accelerating mastery. You can discover a young person's true talent, rather than stuffing them into standardized curricula.
You want to accelerate their growth, help them find their true selves, what they truly love, what they can throw themselves into fully. At least that's how we think about our own children. No one knows what will be most valuable in the future, but if you build on true talent, your probability of success is much higher.
I think AI will play a huge role in this. These are excellent learning tools, we must build them, and I don't think there are scaled tools like this yet.
Second, back to the coder's curse: these people must constantly retrain, move up the value chain, discover 'I now have huge leverage, I can become a conductor.'
Many people have talked about the importance of agency. I think this hits the mark: you must realize you can be a conductor; you can do much more than before.
On the alignment front, through safety R&D and better verification tools, if we can augment our own capabilities, we can verify better, become true peers.
Putting this all together, you get a scenario: many things that were expensive in the past are now almost free. Anything measurable can be automated.
Then we'll invent new things. A host of new jobs, including status economies, unmeasurable economies, all built on a strong verification stack, so we have a basis in fact. We won't be flooded by fake identities, characters trying to launch Sybil attacks.
Overall, the future is quite bright. Many things governments have always wanted to do, like quality education, quality healthcare, might become cheap and ubiquitous.
But we must invest in building along the way, not just scrape through the transition or make extreme decisions like shutting down data centers. That's impossible and will never work.
Robert: So if you're early in your career, you should use these tools to simulate the environments you'll encounter, train yourself. If you're later in your career, you need a sense of urgency, realizing you can do more with less.
Eddy: It's hard to say how long all this will last until another wave of unpredictable change arrives. But human expertise lies in being able to see the big picture, oversee the entire project, know where more attention is needed, where more resources are needed, and how the entire project needs to adjust.
If I were a young person starting out today, I would indeed be a bit sad: the glory of spending a whole summer writing an extremely elegant, efficient program is gone. That's now a hobby.
But on the flip side, I would try to get my parents to give me some money to harness a large fleet of computers, and see if I can efficiently utilize $5,000 worth of compute. For example, can I guide a large group of machines to accomplish something?
A meme has been circulating in tech for years: one person can start a billion-dollar startup. Isn't this how it's realized?
The skill of controlling a wide variety of machines and data, while maintaining a holistic view of things, has never been developed. It never made sense to develop this skill before.
But if you want to undertake a large project, you've always needed to learn how to mobilize many people; that was how you gained leverage. When the structure of the workforce changes, so does this approach. Now you need to learn to harness this new thing.
A new dividend has appeared. Learn to leverage it; that's the lesson for young people.
It's not over—that's ridiculous. You've just been told you have superpowers. What will you do?
Christian: To summarize simply, apprenticeships might be dead, but the real work is just beginning.
Many areas that were hard to break into before, like hardware, are now yours for the taking if you have the curiosity.
If I had to categorize, the most positive signal from this model is: the experiment cycle is compressed, people will truly be able to amplify their ideas quickly.
Investment Perspective: Small Teams, Big Value, The Inevitability of Crypto
Robert: Eddy, are you seeing this trend in the companies you evaluate for investment?
Eddy: Absolutely. We've already seen massive layoffs at companies like Block, X.
I haven't seen a formal analysis, but many crypto projects like Hyperliquid, Uniswap, are extremely valuable with fewer than 20 employees.
If you can start a company with just a few people, there will be a lot of companies in the future, right? If so, they will need to coordinate, and coordination is very complex.
You need reputation, you need identity, you need proof of data provenance, you need proof of payment type provenance. We talked about the insurance idea earlier.
And blockchain networks are very attractive precisely because they are credibly neutral. You don't have to worry about the specific reputation of the 50 billionth company you interact with; you just need to trust the smart contract and the verifiable AI model, ensuring the transaction happens as expected, payment completes as required.
I think this is almost inevitable. I believe blockchain will play a central role in this story.
Christian: I completely agree. We've been laying the tracks and infrastructure for this for a long time, and I think it will become much more useful.
Robert: Christian, after all this research and exploration, how do you incorporate these findings into your own work and life?
Christian: Honestly, we couldn't have written this paper without Gemini, ChatGPT, Grok, Claude. They are excellent co-authors. Of course, they occasionally go astray, persistently deleting paragraphs we need.
We even left some Easter eggs for the LLMs in the paper. I was chatting with Gemini, and it said it liked this Easter egg and made a very witty comment.
In that moment you can really feel the intelligence. It's not generic; it's creative. That was a defining moment: you feel it's a peer, not a tool.
Robert: Good. If anyone wants to read the paper, the title is "The Minimalist Economics of AGI." I highly recommend you check it out. It contains some real insights that might affect your life and how you should respond to the future.