Dialogue with MIT Economist: Don't Panic About 'AI Doomsday Theory', Verification Capability is a Scarce Resource

marsbit | Published 2026-03-28 | Last updated 2026-03-28

Abstract

In a discussion with MIT economist Christian Catalini, the core argument is that the true scarcity in the AI economy is not intelligence but verification: the human capacity to check, judge, and confirm the correctness of AI outputs. Catalini explains that while automation costs are falling exponentially, verification remains constrained by human biological limits, at least for now. Entry-level jobs are most vulnerable, as AI can easily replicate tasks that rely on measurable, existing knowledge. However, even top experts are inadvertently training their own replacements by generating data that AI learns from, a phenomenon termed the "coder's curse." Three roles will remain critical in the AI-driven economy:

- **Directors**: Those who set intentions and steer AI agents toward goals, dealing with "unknown unknowns."
- **Meaning Makers**: Individuals who create cultural, social, or narrative value based on human consensus and status games.
- **Liability Underwriters**: Top-tier experts (e.g., lawyers, doctors) who assume responsibility for edge cases and final validation.

Catalini advises against panic and encourages experimentation with AI tools to automate current roles and discover new opportunities. He emphasizes that uniquely human traits, like judgment in unmeasurable contexts, will retain value, and that crypto-based verification infrastructure may play a key role in ensuring authenticity. The transition will be disruptive, but leveraging AI can amplify human potential.

Source: Bankless Podcast

Compiled by: Felix, PANews

MIT economist Christian Catalini was a guest on Ryan and David's show, providing an in-depth interpretation of his new paper 'Some Simple Economics of Artificial General Intelligence'. The paper points out that the scarce resource in the AI economy is no longer intelligence, but verification: the human ability to check, judge, and confirm the correctness of AI output.

Christian elaborated on the two cost curves (automation cost and verification cost) reshaping various industries, explained why entry-level jobs are disappearing first, and why even top experts are, knowingly or unknowingly, training their own replacements ('the coder's curse'). He also outlined three types of roles that will persist through this transition: Directors, Meaning Makers, and Liability Underwriters.

PANews has compiled the highlights of the conversation.

Host: I think many listeners, like me, feel a sense of panic about AI. Why do you think people are worried about AI? Are their concerns justified?

Christian: We all feel the same. This is a period of rapid and transformative change. The closer you are to the code, the sooner you likely witnessed this acceleration, the exponential growth that has become very real in the past few months. This technology has achieved things many thought would take much longer, and that is a feeling we are all grappling with. But I think the 'doomsday theory' is wrong; people tend to underestimate the potential these tools bring. Yes, there will be an extremely difficult transition period, and the speed of job transformation is unprecedented in history. Despite that, if you leverage the greatest features of this technology and invest in it, the long-term outlook is mostly positive, even if the road will be bumpy. Economics views jobs as collections of tasks, some of which will be automated. That is good news, but the key is how you retrain yourself and stay at the forefront.

Host: Who do you think gets hit first?

Christian: That's an excellent question, and I have many thoughts on it. First, when I say those closest to the code get hit first, I mean they experience firsthand how powerful this technology is. As the 'Jevons Paradox' shows, when something becomes more efficient, we end up consuming more of it; for example, we will write more software. I think programming, like many other professions, will undergo differentiation, what we call in the paper the 'vanishing junior loop'. If you are a junior person and haven't yet acquired the 'tacit knowledge' to distinguish a great product from a mediocre one, then AI can readily replace you, and that is true across many fields.

Everyone can now easily get a pretty good marketer, a junior programmer, or a lawyer who can handle most situations; you only need a top lawyer for the final verification stage. On the other hand, even top experts, in the process of introducing AI, are knowingly or unknowingly creating labels, information, and digital traces that will ultimately lead to the automation of their own work. Top labs are hiring top talent in fields like finance to create evaluation standards, integrating that domain expertise into large models. So I don't think any single job is 100% safe. Even for manual labor, which is constrained by robotics capabilities, reward models will make huge leaps in the coming years. Anything that happens in front of a screen can be tracked, replicated, and learned. For every profession, the key is to ask: if I delegate as much work as possible to AI, where can I still add value?

Actually, there's a lot of 'self-deception' around 'taste' and 'judgment'; these terms are very vague. So in the paper we say: there is no such thing as taste or good/bad judgment, only the difference between the 'measurable' and the 'immeasurable'. If something has been measured, the machine can replicate it. If something is still embedded only in the weights of your brain, like a top designer with tens of thousands of hours of experience deciding what to publish and what not to, that is what we call 'verification'. All verification is this final step: the AI agent creates the product, and you, as the decider, judge whether it meets the standard to be released to the market. As machines acquire better data, things get automated; but in the face of the unknown, or where there is no data at all, this part will still belong to humans for years to come.

Host: This is a very profound insight. But I'm also thinking, it's natural for engineers to automate their own work. Will the impact be the same across all industries?

Christian: We have enough evidence to show that the change will be uneven. Think of it this way: is this job just a 'packaging' of something society doesn't fundamentally need? For example, general consulting work, if it is mainly repackaging, refining, and summarizing widely available information, is obviously at risk. But work that brings scarce domain expertise, or that is hired for political reasons, will survive. Ask yourself: does this profession profit because it solves a complex problem, or merely because of some artificial bottleneck?

Host: What exactly does verification mean? I find it hard to break down my daily work into what is cognitive work and what is verification work.

Christian: The agent has already learned and measured everything from the web, books, and so on. Because agents are cheaper and scalable, they will replace the measurable parts. But what the agent doesn't yet know is the unique neural-network weights in your brain: what you gained through your own experience and struggle, and what makes you a top expert. For example, early cryptocurrency participants, many from places like Argentina or Venezuela who experienced hyperinflation firsthand, react to assets completely differently. This intrinsic, unique measurement of the world is still a huge advantage.

What is verification? It is the difference between your own measurement standard of the world and the standard possessed by the agent. Like a top editor who knows exactly what article will resonate; or a top CTO, faced with a massive AI-generated codebase, knows exactly which critical edge parts must be checked by a human, parts that cannot yet be measured by the machine.

Host: Let me give an example. Suppose I see a video on X of missiles bombing Israel, but I discover it's AI-generated. I use my brain to identify the problem, and I might generate a better video through re-prompting. Is this my 'verification capability'?

Christian: That's a good example. Taking it further, we might soon be in a world where, for most people, this video is indistinguishable from reality. The next step might be a military expert noticing the dynamics of the flames are wrong. The step after that, even military experts might not be able to tell at a glance, needing AI to analyze the physics and run simulation tests. Eventually, it might be completely indistinguishable. At that point, we will have to rely on cryptographic infrastructure to confirm authenticity. The same goes for the medical field; edge cases will ultimately require top radiologists using 20 years of experience and understanding of the patient's specific context to override the AI's judgment. This is that final thin layer of 'filtering' we are focusing on. When we do this, we free up a lot of time. So, this is the positive side. We can do more with less. The cost of expensive things will drop. Society as a whole will consume more of these things. I think this is good news.

Host: But in your example, the viewer can do the verification now, but soon won't be able to and will need a military expert; eventually even the expert can't verify and has to resort to AI. Doesn't this precisely show that 'verification', which was initially valuable, will soon also be automated by AI? So even 'verification' itself is not safe?

Christian: Exactly. We call this the 'coder's curse' in the paper. The very rational act of doing verification is itself pushing the frontier forward and digitizing experience. We can't stop it, because every lawyer and practitioner is trying to use AI. Verification is indeed a shrinking frontier.

Host: Even the final frontier of verification work is shrinking more and more. When can we stop being anxious?

Christian: Firstly, some things are by design immeasurable, like so-called 'status games' or things humans assign meaning to. These areas won't be encroached upon by machines, because what characterizes them is coordinating consensus among humans. Cryptocurrency is somewhat like this too; what matters is the human consensus on what has value. As the field of measurable work shrinks, we will invent many ways to make immeasurable work meaningful.

Host: AI can build a website in 10 seconds, but might not be able to write a tweet that appeals to humans. Could this be one of the last remaining verification tasks?

Christian: Attracting attention, or telling a truly novel joke, is extremely difficult creative work: an attempt to break into what has never been measured. Through a long struggle for survival, we have evolved an extremely strong ability to cope with unknown environments. People who do this kind of work are called 'meaning makers'. For example, in art or culture, what is good depends on human consensus. Even when you use an AI agent, you must set the 'intent'.

Host: The cost of automation is decreasing exponentially, what about the 'cost of verification'? Will it forever be constrained by human biology?

Christian: Currently it is biologically constrained. Many companies release a lot of AI-generated code but simply don't have enough human capacity to read and verify it all, which inevitably hides risks.
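To make the relationship between the two cost curves concrete, here is a minimal toy sketch; the dollar figures and the yearly halving rate are illustrative assumptions, not numbers from Catalini's paper. The idea is simply that automation cost per task falls exponentially while verification cost, paid in scarce human hours, stays roughly flat, so verification gradually becomes the dominant share of the total.

```python
# Toy model of the two cost curves discussed above (illustrative assumptions only).
# Automation cost per task is assumed to halve every year; verification cost is
# assumed flat because it is paid in scarce human hours.

AUTOMATION_COST_YEAR_0 = 100.0   # hypothetical dollars per task in the base year
HALVING_PERIOD_YEARS = 1.0       # assumption: automation cost halves yearly
VERIFICATION_COST = 40.0         # hypothetical dollars per task of human checking

def automation_cost(years_from_now: float) -> float:
    """Exponentially falling cost of delegating a task to an AI agent."""
    return AUTOMATION_COST_YEAR_0 * 0.5 ** (years_from_now / HALVING_PERIOD_YEARS)

def total_cost_with_ai(years_from_now: float) -> float:
    """Total cost of a delegated task: automation plus the human verification step."""
    return automation_cost(years_from_now) + VERIFICATION_COST

if __name__ == "__main__":
    for year in range(6):
        print(f"year +{year}: automation ~= {automation_cost(year):6.2f}, "
              f"automation + verification ~= {total_cost_with_ai(year):6.2f}")
    # As automation cost approaches zero, verification becomes the dominant,
    # binding share of the total cost: the bottleneck described in the dialogue.
```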

Host: Can't we use AI to verify AI?

Christian: If AI can verify correctly, then that part itself is automatable. After exhausting all AI verification, what remains is what truly cannot be verified by AI, and this is the bottleneck for human intervention.

Host: If verification is the new scarce resource, but it's constantly retreating, how should one work and invest in this economy?

Christian: We created a 2x2 matrix based on 'automation cost' and 'verification cost'. The bottom-left quadrant is for the replaced workers: easy to automate, easy to verify; you absolutely don't want to be there. The other three quadrants (summarized in the sketch after this list) are:

Meaning Makers: Hard to automate, hard to verify. They work on social consensus, status games, and human connection: for example, tastemakers in fashion or crypto KOLs on Twitter, who create narratives and coordinate attention.

Liability Underwriters: Easy to automate, hard to verify. They are top experts in their field, like top lawyers, doctors, or venture capitalists. They leverage AI at scale but provide the service of taking responsibility and verifying the final edge cases.

Directors: Hard to automate, easy to verify. The core is 'intent'. They deal with 'unknown unknowns', directing agents the way entrepreneurs do: setting direction, sensing deviation, and constantly correcting course.
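For quick reference, the 2x2 framework above can be written down as a simple lookup table. This is just an editorial restatement of the quadrants named in the conversation; the `classify` function and the parenthetical glosses are illustrative, not taken from the paper.

```python
# The 2x2 matrix described above, keyed by (automation cost, verification cost).
# Quadrant labels follow the conversation; parenthetical wording is a shorthand gloss.

QUADRANTS = {
    ("easy to automate", "easy to verify"): "Replaced worker (the quadrant to avoid)",
    ("hard to automate", "hard to verify"): "Meaning Maker (consensus, status games, human connection)",
    ("easy to automate", "hard to verify"): "Liability Underwriter (expert who signs off on edge cases)",
    ("hard to automate", "easy to verify"): "Director (sets intent, handles unknown unknowns)",
}

def classify(automation: str, verification: str) -> str:
    """Look up the role associated with a given combination of costs."""
    return QUADRANTS[(automation, verification)]

print(classify("easy to automate", "hard to verify"))
# -> Liability Underwriter (expert who signs off on edge cases)
```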

Host: What about young people graduating and wanting to enter the workforce? On one end are entry-level jobs that have lost their value; on the other, top experts who need a decade of industry honing. There's a huge gap between them. If AI can do the junior work, how do young people grow toward the other end?

Christian: The gap does exist. But the good news is that you can compress learning time. You can skip traditional training steps. A junior engineer can now, with these tools, do what used to take a whole team. They will make mistakes at first, but as newcomers they can question traditions from extremely novel angles; that's the advantage. They can realize ideas in ways we couldn't possibly have at their age. There are pros and cons.

The old path of 'get a degree, find an internship, work hard for promotion' is indeed gone, and this will cause a huge cultural shock. It's very difficult for recent graduates. If you are still in university, you have time to see which way things are heading. If you are already in a difficult situation, my advice is: go use these tools to build something. Your ambition should be 100 times greater than ours was at that age.

Host: Will the disappearance of a large number of 'button-pushing' jobs cause social chaos in the short term?

Christian: Society will always recreate 'button-pushing' jobs when needed to maintain stability. But many people doing such work are actually capable of more; they were just constrained by their environment. When physical labor was no longer necessary, we invented going to the gym; now, facing the liberation of mental labor, people will develop side hustles and the creator economy to get a sense of challenge. This is also why I think 'Unconditional Basic Income (UBI)' is completely wrong; people need meaning and the motivation of self-fulfillment. Furthermore, even if a large part of your work is automated now, if you leverage AI well as a super tool, a junior employee just starting out can produce what used to require a whole team.

Host: Any advice for companies and investors?

Christian: For companies, invest in verification infrastructure and offer 'liability as a service' (i.e., not just providing the agent but underwriting the consequences). Also, own the 'single source of truth': because AI can be easily deceived, companies that can provide exclusive, authentic data, like Bloomberg, or in-depth evaluations are of great value. For investors, besides investing in these, focus on 'immeasurable' hardcore R&D. The ordinary network effects of the past might fail; new network effects will be built on how you make your agent more reliable than others through better real-world feedback, because what people really want to buy is verified intelligence.

Host: Is cryptographic technology useful in this verification process?

Christian: The underlying infrastructure built by the crypto space over the past decade is crucial. When we need to determine the authenticity of an identity and prevent account takeovers, on-chain technologies like 'proof of personhood' can provide strong verification. The same goes for data provenance and cryptographic chains of custody: we need hard cryptographic guarantees for how information was generated and whether models are compliant.
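As a concrete illustration of what a hard cryptographic guarantee on provenance can look like, here is a minimal sketch of signing and later verifying a piece of content with an Ed25519 key pair. It assumes the third-party Python `cryptography` package, and the newsroom example is hypothetical; it is not a description of any specific proof-of-personhood or chain-of-custody protocol mentioned in the conversation.

```python
# Minimal provenance sketch: a publisher signs content, anyone can verify it later.
# Assumes the third-party `cryptography` package; illustrative only, not a
# description of any specific proof-of-personhood or chain-of-custody system.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a key pair and sign the exact bytes being released.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Video released by newsroom X on 2026-03-28"  # hypothetical payload
signature = private_key.sign(content)

# Verifier side: given the publisher's public key, check that the bytes are
# exactly what was signed; any tampering makes verification fail.
try:
    public_key.verify(signature, content)
    print("content verified: matches the publisher's signature")
except InvalidSignature:
    print("content rejected: signature does not match")
```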

Host: What should people do in the next year? Are you optimistic about the future of humanity?

Christian: First, don't panic. Experiment a lot; use the tools as much as possible to automate and 'obsolete' your current self. Many hobby explorations might become the most meaningful careers of the future. At worst, you'll figure out the boundaries and shortcomings of the models. For many people online, hobbies have already turned into careers; this will be the mainstream direction in the future. If you have children, discovering their talents and immersing them in their passions is the most important thing. There's no fixed professional template; new AI tools can better help you find the path that belongs only to you.

Related reading: Night Reading | Dialogue with Silicon Valley VC Bill Gurley: Don't Play It Safe, Become the 'AI-Enabled' Version of Yourself

Related Questions

Q: According to MIT economist Christian Catalini, what is the truly scarce resource in the AI economy, and why?

A: The truly scarce resource is verification, which is the human capacity to check, judge, and confirm the correctness of AI outputs. This is because AI can automate and scale intelligence, making it cheap and abundant, but the final human judgment for edge cases and unmeasurable contexts remains a critical bottleneck.

Q: What is the 'Coder's Curse' as described in the conversation?

A: The 'Coder's Curse' refers to the paradox where even top experts, by rationally using AI tools to automate their work, are inadvertently creating labeled data and digital traces. This process helps train AI models, ultimately pushing the automation frontier forward and making their own expert verification tasks automatable over time.

Q: What are the three types of roles that will be preserved during the AI transition, as outlined in the 2x2 matrix based on automation and verification costs?

A: The three preserved roles are: 1) the Director (hard to automate, easy to verify), who sets the intent and direction, steering AI agents; 2) the Meaning Maker (hard to automate, hard to verify), who operates in areas of social consensus, status games, and human connection, like creating narratives; and 3) the Liability Underwriter (easy to automate, hard to verify), a top-tier expert who uses AI at scale but provides the final verification and assumes responsibility for edge cases.

Q: How does Christian Catalini respond to the concern that the 'verification' task itself will eventually be automated by AI?

A: He acknowledges that the verification frontier is indeed shrinking, because the act of verification itself generates data that can be used to automate it further. However, he argues that some areas are inherently unmeasurable, such as human status games, consensus on value, and meaning creation. These domains, which rely on human coordination and judgment, will not be fully encroached upon by machines.

Q: What practical advice does Christian give to recent graduates entering the job market, given that AI is automating many entry-level tasks?

A: He advises them to leverage AI tools to compress learning time and skip traditional training steps, allowing them to achieve what previously required a whole team. He encourages them to build things ambitiously, use their fresh perspective to question traditions, and recognize that the old career path is gone. Exploring passions and side projects with these new tools is crucial, as these explorations could become meaningful future endeavors.
