Why Do I Feel Less Valuable the More I Use AI?
The article discusses the "Zhang Wenhong Paradox," named after a prominent Chinese doctor who refuses to integrate AI into hospital medical records. He argues that his decades of experience let him use AI to review cases and catch its errors, whereas young doctors who rely on AI from the start risk never developing the independent clinical judgment needed to verify its output. This captures a broader anxiety among skilled professionals (programmers, lawyers, analysts): as AI handles 80% of the execution work, they fear the remaining 20% they contribute may no longer justify their professional worth.
The core argument is that AI acts as a multiplier: it amplifies existing skill (10×) but cannot compensate for a fundamental lack of understanding (0 × 10 = 0). True skill in the AI era is redefined as judgment: the ability to define problems, think structurally, and verify AI outputs. The author warns against outsourcing thinking to AI; clear, structured input is crucial to avoid "garbage in, garbage out." Moreover, AI tends to produce average, consensus-based answers, so deep, first-principles understanding is needed to challenge its suggestions and avoid mediocrity.
Historically, tools like computers transformed professions (e.g., lawyers shifted from finding cases to crafting strategies). Similarly, AI is shifting human roles from "doers" to "validators" and "commanders" who integrate macro-strategy with micro-verification. The conclusion: this is the best era for independent thinkers who can leverage AI as a powerful tool, but it requires building a solid foundation of expertise to avoid becoming mere operators of the technology. The key is to "compete with AI in setting questions, not answering them."
marsbit · 01/19 12:08