Source: Bankless Podcast
Compiled by: Felix, PANews
MIT economist Christian Catalini was a guest on Ryan and David's show, providing an in-depth interpretation of his new paper 'Some Simple Economics of Artificial General Intelligence'. The paper points out that the scarce resource in the AI economy is no longer intelligence, but verification: the human ability to check, judge, and confirm the correctness of AI output.
Christian elaborated on the two cost curves (automation cost and verification cost) reshaping various industries, explained why entry-level jobs are disappearing first, and why even top experts are, knowingly or unknowingly, training their own replacements ('the coder's curse'). He also outlined three types of roles that will persist through this transition: Directors, Meaning Makers, and Liability Underwriters.
PANews has compiled the highlights of the conversation.
Host: I think many listeners, like me, feel a sense of panic about AI. Why do you think people are worried about AI? Are their concerns justified?
Christian: We all feel it. This is a period of rapid, transformative change. The closer you are to the code, the sooner you witnessed this acceleration, the exponential growth that has become very real in the past few months. The technology has achieved things many thought would take much longer, and we are all grappling with that. But I think the doomsday view is wrong; people tend to underestimate the potential of these tools. Yes, there will be an extremely difficult transition, and the speed of job transformation is unprecedented in history. Despite that, if you leverage the technology's greatest strengths and invest in it, the long-term outlook is mostly positive, even if the road will be bumpy. Economics views a job as a collection of tasks. Some of those tasks will be automated, which is good news, but the key is how you retrain yourself and stay at the frontier.
Host: Who do you think gets hit first?
Christian: That's an excellent question, and I have many thoughts on it. First, when I say those closest to the code get hit first, I mean they experience firsthand how powerful this technology is. As the 'Jevons paradox' suggests, when something becomes more efficient, we end up consuming more of it; for example, we will write more software. I think programming, like many other professions, will undergo differentiation, what we call in the paper the 'vanishing junior loop'. If you are a junior person who hasn't yet acquired the 'tacit knowledge' to distinguish a great product from a mediocre one, then AI can replace you quite effectively, across many fields.
Everyone can now easily get a pretty good marketer, a junior programmer, or a lawyer who can handle most situations; you only need a top lawyer for the final verification stage. On the other hand, even top experts, as they adopt AI, are knowingly or unknowingly creating the labels, information, and digital traces that will ultimately automate their own work. Top labs are hiring top talent in fields like finance and using them to build evaluation benchmarks, baking that domain expertise into large models. So I don't think any single job is 100% safe. Even manual labor, currently constrained by robotics capabilities, will be affected as reward models make huge leaps in the coming years. Anything that happens in front of a screen can be tracked, replicated, and learned. For every profession, the key question is: if I delegate as much work as possible to AI, where can I still add value?
Actually, there's a lot of self-deception around 'taste' and 'judgment'; they are very vague terms. So in the paper we say: there is no such thing as taste or good or bad judgment, only the difference between the 'measurable' and the 'immeasurable'. If something has been measured, the machine can replicate it. If something is still embedded only in the weights of your brain, like a top designer with tens of thousands of hours of experience deciding what to publish and what not to, that is what we call 'verification'. Verification is that final step: the AI agent creates the product, and you, as the decider, judge whether it meets the standard to be released to the market. As machines acquire better data, more gets automated; but in the face of the unknown, or where there is no data at all, this part will still belong to humans for years to come.
Host: This is a very profound insight. But I'm also thinking, it's natural for engineers to automate their own work. Will the impact be the same across all industries?
Christian: We have enough evidence to show that the change will be uneven. Think of it this way: is this job just the 'packaging' of something society doesn't fundamentally need? General consulting work, for example, is obviously at risk if it mainly repackages, refines, and summarizes widely available information. But work that brings scarce domain expertise, or that is commissioned for political reasons, will survive. Ask yourself: does this profession profit because it solves a complex problem, or merely because of some artificial bottleneck?
Host: What exactly does verification mean? I find it hard to break down my daily work into what is cognitive work and what is verification work.
Christian: The agent has already learned and measured everything it can from the web, books, and so on. Because agents are cheaper and scalable, they will take over the measurable parts. What the agent doesn't know yet is the unique neural network weights in your brain: what you gained through your own experience and struggle, what makes you a top expert. For example, many early cryptocurrency participants came from places like Argentina or Venezuela and experienced hyperinflation firsthand; they react to assets completely differently. That intrinsic, unique measurement of the world is still a huge advantage.
What is verification? It is the difference between your own measurement standard of the world and the standard possessed by the agent. Like a top editor who knows exactly what article will resonate; or a top CTO, faced with a massive AI-generated codebase, knows exactly which critical edge parts must be checked by a human, parts that cannot yet be measured by the machine.
Host: Let me give an example. If I see a video on X about missiles bombing Israel, but I find it's AI-generated. I use my brain to identify the problem and might generate a better video through re-prompting. Is this my 'verification capability'?
Christian: That's a good example. Taking it further, we might soon be in a world where, for most people, this video is indistinguishable from reality. The next step might be a military expert noticing the dynamics of the flames are wrong. The step after that, even military experts might not be able to tell at a glance, needing AI to analyze the physics and run simulation tests. Eventually, it might be completely indistinguishable. At that point, we will have to rely on cryptographic infrastructure to confirm authenticity. The same goes for the medical field; edge cases will ultimately require top radiologists using 20 years of experience and understanding of the patient's specific context to override the AI's judgment. This is that final thin layer of 'filtering' we are focusing on. When we do this, we free up a lot of time. So, this is the positive side. We can do more with less. The cost of expensive things will drop. Society as a whole will consume more of these things. I think this is good news.
Host: But in your example, currently he is doing verification, but soon he won't be able to, needing a military commander, and finally even the commander can't verify and has to resort to AI. Doesn't this precisely show that 'verification', which was initially valuable, will soon also be automated by AI? So even 'verification' itself is not safe?
Christian: Exactly. We call this 'the coder's curse' in the paper. The very rational act of doing verification is itself pushing the frontier forward and digitizing experience. We can't stop it, because every lawyer and practitioner is trying to use AI. Verification is indeed a shrinking frontier.
Host: Even the final frontier of verification work is shrinking more and more. When can we stop being anxious?
Christian: Firstly, some things are by design immeasurable, like so-called 'status games' or things humans assign meaning to. These areas won't be encroached upon by machines, because their defining characteristic is coordinating consensus among humans. Cryptocurrency is somewhat like this too: what matters is the human consensus on what has value. As the field of measurable work shrinks, we will invent many ways to make immeasurable work meaningful.
Host: AI can build a website in 10 seconds, but might not be able to write a tweet that appeals to humans. Could this be one of the last remaining verification tasks?
Christian: Attracting attention, or telling a truly novel joke, is extremely difficult creative work; it tries to break something that has never been measured. Through a long struggle for survival, we have evolved an extremely strong ability to cope with unknown environments. People who do this kind of work are called 'meaning makers'. In art or culture, for example, what is good depends on human consensus. Even when you use an AI agent, you must set the 'intent'.
Host: The cost of automation is decreasing exponentially, what about the 'cost of verification'? Will it forever be constrained by human biology?
Christian: Currently it is biologically constrained. Many companies release a lot of AI-generated code but simply don't have enough human capacity to read and verify it all, which inevitably hides risks.
Host: Can't we use AI to verify AI?
Christian: If AI can verify correctly, then that part itself is automatable. After exhausting all AI verification, what remains is what truly cannot be verified by AI, and this is the bottleneck for human intervention.
Host: If verification is the new scarce resource, but it's constantly retreating, how should one work and invest in this economy?
Christian: We created a 2x2 matrix based on 'automation cost' and 'verification cost'. The bottom left quadrant is the replaced workers: easy to automate, easy to verify, you absolutely don't want to be here. The other three quadrants are:
Meaning Makers: Hard to automate, hard to verify. They work on social consensus, status games, and human connection. For example, taste makers in fashion, crypto KOLs on Twitter, they create narratives and coordinate attention.
Liability Underwriters: Easy to automate, hard to verify. They are top experts in their field, like top lawyers, doctors, or venture capitalists. They leverage AI at scale but provide the service of taking responsibility and verifying for the final edge cases.
Directors: Hard to automate, easy to verify. The core is 'intent'. They deal with 'unknown unknowns', directing agents like entrepreneurs, setting direction, sensing deviation and constantly correcting course.
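The 2x2 framework above can be sketched as a small lookup. This is a minimal illustration, not anything from the paper itself: the `classify` helper and the example placements are assumptions made for the sake of the sketch; the quadrant names come from the conversation.

```python
def classify(automation_cost: str, verification_cost: str) -> str:
    """Map (automation cost, verification cost) to a quadrant label
    from the conversation's 2x2 matrix. Costs are 'low' or 'high'."""
    quadrants = {
        ("low",  "low"):  "Replaced worker",        # easy to automate, easy to verify
        ("high", "low"):  "Director",               # hard to automate, easy to verify
        ("low",  "high"): "Liability Underwriter",  # easy to automate, hard to verify
        ("high", "high"): "Meaning Maker",          # hard to automate, hard to verify
    }
    return quadrants[(automation_cost, verification_cost)]

# Illustrative placements drawn from the discussion:
print(classify("low",  "low"))   # routine report drafting -> Replaced worker
print(classify("low",  "high"))  # a top lawyer signing off on edge cases
print(classify("high", "low"))   # an entrepreneur setting direction
print(classify("high", "high"))  # a taste maker coordinating attention
```

The point of the table form is that only one quadrant (low, low) is existentially at risk; the other three each survive for a different reason.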
Host: What about young people graduating and wanting to enter the workforce? On one end, entry-level jobs are vanishing; on the other, top-expert roles require a decade of industry honing. There's a huge gap between them. If AI can do the junior work, how do young people grow to the other end?
Christian: The gap does exist. But the good news is you can compress learning time. You can skip traditional training steps. A junior engineer can now, with tools, do the work of what used to be a team. They will make mistakes at first, but as newcomers they can question traditions from extremely novel angles, that's the advantage. They can realize ideas in ways we couldn't possibly do when we were young. There are pros and cons.
The old path: 'get a degree, find an internship, work hard for promotion', is indeed gone, and this will cause huge cultural shock. It's very difficult for recent graduates. If you are still in university, you have time to see the direction. If you are already in a difficult situation, my advice is: go use these tools to build something. Your ambition should be 100 times greater than ours was at that age.
Host: Will the disappearance of a large number of 'button-pushing' jobs cause social chaos in the short term?
Christian: Society will always recreate 'button-pushing' jobs when needed to maintain stability. But many people doing such work are actually capable of more; they were just constrained by their environment. When physical labor was no longer necessary, we invented going to the gym; now, facing the liberation of mental labor, people will develop side hustles and the creator economy to get a sense of challenge. This is also why I think Universal Basic Income (UBI) is completely wrong: people need meaning and the motivation for self-fulfillment. Furthermore, even if a large part of your work is automated now, if you leverage AI well as a super tool, a junior employee just starting out can output what used to require a whole team.
Host: Any advice for companies and investors?
Christian: For companies: invest in verification infrastructure and offer 'liability as a service' (i.e., not just providing the agent but underwriting its consequences). Also, master the 'single source of truth'. Because AI can be easily deceived, companies that can provide exclusive, authentic data, like Bloomberg, or in-depth evaluations are of great value. For investors: besides investing in these, focus on 'immeasurable' hardcore R&D. The ordinary network effects of the past may fail; new network effects will be built on making your agent more reliable than others through better real-world feedback, because what people really want to buy is verified intelligence.
Host: Is cryptographic technology useful in this verification process?
Christian: The underlying infrastructure built by the crypto space over the past decade is crucial. When we need to determine the authenticity of an identity and prevent account takeovers, on-chain technologies like 'proof of personhood' can provide strong verification. The same applies to data provenance and cryptographic chains of custody: we need hard cryptographic guarantees about how a piece of information was generated and whether the models involved are compliant.
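The chain-of-custody idea can be illustrated with a minimal hash chain: each record commits to the content and to the previous record, so altering any step breaks every hash after it. This sketch is an assumption for illustration only; real provenance systems (for example, C2PA-style standards) additionally use digital signatures and certificates rather than bare hashes.

```python
import hashlib
import json

def add_record(chain, content: bytes, note: str):
    """Append a provenance record whose hash commits to the content
    and to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"note": note,
              "content_hash": hashlib.sha256(content).hexdigest(),
              "prev": prev}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify_chain(chain) -> bool:
    """Recompute every hash and check the links; any tampering fails."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or recomputed != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, b"original video bytes", "captured by camera")
add_record(chain, b"edited video bytes", "edited by studio")
print(verify_chain(chain))   # True: chain is intact
chain[0]["note"] = "forged"  # tamper with history
print(verify_chain(chain))   # False: the tampering is detected
```

The design choice is that verification requires no trust in the holder of the chain, only recomputation, which is exactly the property the conversation asks of provenance infrastructure.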
Host: What should people do in the next year? Are you optimistic about the future of humanity?
Christian: First, don't panic. Experiment a lot; use the tools as much as possible to 'obsolete' and automate your current self. Many hobby explorations might become the most meaningful careers of the future. At worst, you'll figure out the boundaries and shortcomings of the models. For many people online, hobbies have already turned into careers, and this will be the mainstream direction in the future. If you have children, discovering their talents and immersing them in their passions is the most important thing. There's no fixed professional template; the new AI tools can help you find the path that belongs only to you.