This article is from WeChat official account: Budongjing, author: Budongjing Yeshu's Rust, original title: "Zhang Wenhong's Paradox in the AI Era: Why Do I Feel Less Valuable the More I Use AI?", title image from: Visual China
A few days ago, I came across a short video featuring a speech by Zhang Wenhong, director of the National Center for Infectious Diseases, at the Hong Kong High-Level Forum on January 10th. He clearly stated: "I refuse to introduce AI into the hospital medical record system."
His reasoning: introducing AI before young doctors have undergone systematic training will fundamentally change their training path, undermining the independent diagnostic abilities they are supposed to build through traditional training.
Zhang Wenhong explained that he certainly uses AI himself, letting AI review cases first. But the key is that, with more than thirty years of clinical experience, he can immediately see where the AI is wrong.
The problem lies with young doctors.
If a doctor starts relying on AI for diagnostic conclusions from the internship stage, skipping complete clinical thinking training, they will forever lose a crucial ability: the ability to discern right from wrong in AI's output.
Zhang Wenhong's remarks, from the perspective of an ordinary AI user, reveal a widely misunderstood reality about skills and leverage in the AI era.
Over the past year or two, I've observed a peculiar "collective anxiety."
Interestingly, this anxiety doesn't come from those who don't understand technology; on the contrary, it comes more from elite groups who are already proficient in using AI: programmers, lawyers, analysts, and self-media creators.
Everyone was initially excited, thinking AI would turn them into superhumans. But after a brief carnival of efficiency gains, many fell into a deeper sense of powerlessness:
When AI can complete 80% of the work at zero cost, can my remaining 20% of value uphold my professional dignity?
If an AI can produce in minutes the code that would take me two weeks; if a large model can instantly produce a polished due diligence report; if Gemini or Doubao can let people with no painting background produce master-level artwork; if GPT can "accurately" read medical examination and lab reports, then where exactly is the moat of human skill?
Previously, The Atlantic published an article saying we are entering an era of deskilling; but the other side of the coin is precisely this: AI hasn't made skills useless; it has triggered a severe "skill inflation." It's just that skills need to be redefined.
In an era where execution costs approach zero, AI is a mirror. It amplifies not only your efficiency but also the granularity or precision of your cognition.
You feel "useless" probably because AI mercilessly exposes a fact: most of the work you were proud of in the past was just "moving bricks," execution, "obeying and doing," not "thinking," let alone proposing and solving problems.
The truth about 21st-century skills is no longer about how many tools you have in hand, but about how much genuine leverage you have in your mind. The comprehensive ability of "macro control + micro verification" is the real iron rice bowl in the AI era.
I. Zhang Wenhong's Paradox: 10 times 0 is still 0
There's a widely circulated view in Silicon Valley that is often misinterpreted.
People say: "AI is a 10x amplifier of productivity."
The mathematical meaning of this sentence is more ruthless than its literal meaning.
If your current ability is 1, AI turns you into 10; if you are 10, AI turns you into 100. But if your underlying understanding of a certain field is 0, then 0 multiplied by 10 is still 0.
This is the core of Zhang Wenhong's concern: a young doctor who relies on AI from the internship stage, his clinical judgment might be 0. No matter how powerful AI is, 0 multiplied by any number is still 0.
Even more frightening, this "0" doesn't even know it's 0.
Zhang Wenhong was very blunt: "New doctors cannot only know how to rely on AI for diagnosis." Why? Because even if AI's accuracy is as high as 95%, that 5% error needs to be identified and corrected by professional doctors.
If the doctor doesn't possess independent diagnostic ability at all, how can he discover AI's errors? How can he handle difficult and complicated cases that AI cannot handle?
This is what I call the "Zhang Wenhong Paradox." On one level, it's a chicken-or-egg problem. But on another level, it emphasizes whether the human is using the tool or the tool is using the human.
It reveals the first layer of truth about skills in the AI era:
The essence of AI is "probability fitting," while human value lies in "consequence bearing."
In the past, the skills we talked about often referred to proficient execution, mastering grammar, memorizing legal provisions, mastering various shortcuts. But in the AI era, these hard skills are rapidly depreciating, becoming infrastructure.
Replacing them is a more hidden, scarcer ability: judgment. And so-called "judgment" is knowing the long-term consequences of one's actions.
Imagine a scenario: a senior engineer and a novice both use AI to write code.
The novice gets just code blocks. He cannot judge whether this code has architectural hidden dangers, cannot predict its performance under extreme concurrency, and doesn't even know if this is a "dead end" solution.
The senior engineer sees not code, but a path. He knows what tasks to give AI, knows how to evaluate and accept the results, and above all knows which link to correct when the AI makes a mistake (and AI will definitely make mistakes).
For the novice, AI is a black box, and he can only pray it outputs the correct answer. For the expert, AI is an intern team with infinite energy that strikes wherever he points.
Thus, the future divide between experts and ordinary people lies in whether you possess the ability to "verify AI output."
Zhang Wenhong can see at a glance where the AI's diagnosis is wrong, not by some mysterious intuition, but by the "meta-ability" accumulated over thirty years of clinical experience. This ability is precisely what young doctors who skip training with AI lack the most.
Therefore, without deep professional knowledge as ballast, AI brings not efficiency, but expensive chaos.
II. Why Are Your Prompts Always "A Bit Off"?
Why can some people use AI to solve complex problems, while others can only use it as a chatbot?
The problem is not that you can't write "spells," but that the entropy of your thinking is too high.
Recently, there is a very alarming phenomenon: people are starting to outsource thinking itself to AI.
Encountering a problem, they throw a vague, undigested demand at the model without breaking it down, then get angry at the mediocre output: "This AI is simply useless."
Actually, it's not that AI is stupid; it's that you haven't thought clearly.
No matter how advanced the AI model is, it is essentially a prediction machine based on "context." Its output quality is strictly limited by the quality of the context you input. This is the modern version of "Garbage In, Garbage Out."
The top skill of the 21st century has become "clear expression" and "structured thinking."
True masters, before opening the dialog box, have already completed a rigorous deduction in their minds:
1. Define the problem: What core contradiction am I trying to solve?
2. Deconstruct the logic: What subtasks constitute this big problem? What are the dependencies?
3. Set standards: What kind of result is considered qualified?
For example, before letting AI help develop a feature, have you clarified the data flow? Before letting AI write an article, have you constructed a unique framework of viewpoints?
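The three-step deduction above (define the problem, deconstruct the logic, set standards) can be sketched as a tiny prompt builder. Everything here is illustrative: the function name, the field layout, and the sample content are my own assumptions, not an API from the article or any real library.

```python
# A minimal sketch of structured prompting: the problem definition,
# subtask breakdown, and acceptance criteria are supplied by the human;
# the function only assembles them into a clear, verifiable request.

def build_prompt(problem: str, subtasks: list, standards: list) -> str:
    """Assemble a structured prompt from the three steps:
    1. define the problem, 2. deconstruct it into subtasks,
    3. state what counts as a qualified result."""
    lines = ["Problem: " + problem, "", "Subtasks:"]
    lines += ["%d. %s" % (i, t) for i, t in enumerate(subtasks, 1)]
    lines += ["", "Acceptance criteria:"]
    lines += ["- " + s for s in standards]
    return "\n".join(lines)

# Hypothetical usage: the specifics are invented for illustration.
prompt = build_prompt(
    problem="Summarize churn drivers from Q3 support tickets",
    subtasks=[
        "Cluster tickets by complaint type",
        "Rank clusters by frequency and revenue at risk",
    ],
    standards=[
        "Each claim cites at least one ticket cluster",
        "Output fits on one page",
    ],
)
print(prompt)
```

The point is not the code itself but the discipline it encodes: the "1" (the logical skeleton) is written by you before the model ever sees the request.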
Don't expect AI to complete the "0 to 1" thinking for you.
What AI is good at is actually filling in the flesh and blood (from 1 to 100), but that "1," that core insight, the logical skeleton, must be provided by you.
If you cannot clearly explain your idea to a human colleague, you will never get satisfactory results from AI.
Clear writing is clear thinking.
In the future, programming in natural language will be a universal skill. But this does not mean programming has become simpler; it means the precision of language and logic has become the new code.
If your thinking is chaotic, AI will only efficiently amplify the chaos.
III. Breaking Out of the Information Cocoon: Closer to the Essence Than 99% of People
Since AI is trained on the massive amount of existing human data, it inherently carries a huge flaw: mediocre consensus, i.e., regression to the mean.
You ask AI for opinions on health, finance, or history, and it will most likely give you a "textbook" answer. These answers are safe, correct, but often extremely mediocre because they merely repeat the information with the highest frequency on the Internet.
This leads to the third dimension: the insight to distinguish truth from falsehood.
Knowledge and Understanding are two different things.
- Knowledge is knowing "one should do it this way";
- Understanding is knowing "why one should do it this way, and when not to do it this way."
This is precisely the fundamental gap between Zhang Wenhong and young doctors.
Young doctors can instantly obtain "knowledge" through AI, such as diagnostic results, medication suggestions, treatment plans. But Zhang Wenhong possesses "understanding": he knows the boundaries of this knowledge, under what circumstances to break the routine, when the "standard answer" given by AI is wrong.
In this era of information overload, if you only acquire information through cramming education and algorithm recommendations, you are essentially mechanically repeating in a huge "echo chamber." You don't truly understand the operating mechanism of things.
To be smarter than AI, we need to be closer to the essence of things (first principles) than 99% of people.
- Want to understand business? Don't just read bestsellers and public accounts, study cash flow, leverage, supply and demand, and human greed.
- Want to understand health? Don't just believe so-called authoritative guidelines, study the biological mechanisms of metabolism, hormones, and inflammatory responses.
When AI gives you a "standardized suggestion," only those who truly understand how the underlying system operates can keenly spot its flaws, or decisively overturn AI's suggestion in special situations.
As Zhang Wenhong said: Whether you will be misled by AI depends on whether your own ability is stronger than AI's. And you can't compare knowledge with AI, only understanding.
The future competitive advantage belongs to those who dare to question the "training data." You need to build your own cognitive system. This system is not copied; it is verified by you through practice, through painful feedback loops, through independent thinking.
AI is the average of all human knowledge. If you want to surpass the average, you cannot rely solely on AI; you must possess unique insights that AI cannot derive through statistical probability.
IV. After Execution Value Returns to Zero: From Doer to Acceptor
Taking a long-term view, history may not repeat itself, but it often rhymes.
In the 1980s, the popularization of computers panicked the accountants and lawyers of the day. Before then, a lawyer might search through piles of files for days to find a precedent. Electronic retrieval turned that work into a matter of seconds.
Did lawyers become unemployed? No. On the contrary, the legal industry became larger and more complex.
Because retrieval became easy, clients' expectations of lawyers also increased. People no longer paid for "finding precedents," but for "constructing unique defense strategies based on complex precedents."
Similarly, when AI takes over code writing, copy generation, and basic diagnosis, the human role is undergoing an essential leap:
We are evolving from "craftsmen" to "commanders"; upgrading from "doers" to "acceptors."
In the past, an excellent engineer might spend 50% of his time writing code and 50% thinking about architecture. Now he can spend 90% of his time thinking about architecture, understanding the business, and optimizing the experience, leaving the coding to AI (which he then reviews).
This means the upper limit of work complexity is opened up.
An independent developer can now single-handedly run a company that used to require a team of ten; a knowledgeable self-media creator can produce a week's worth of content in a day; a senior doctor (like Zhang Wenhong) can, with AI assistance, handle a caseload that was previously unimaginable.
This is the new definition of "skill" in the AI era:
It is no longer a single-dimensional "specialization," but a cross-dimensional integration ability.
You don't need to lay every brick yourself, but you must know the mechanical structure of the building, must have aesthetic ability to decide the appearance of the building, must have business acumen to decide where the building is most valuable to build.
This comprehensive ability of "macro control + micro verification" is the real iron rice bowl in the AI era.
The two key abilities emphasized by Zhang Wenhong essentially mean this:
1. Judging the accuracy of AI diagnosis (micro verification)
2. Diagnosing and treating difficult and complicated cases that AI cannot handle (macro control)
Doctors without these two abilities can only be considered "AI operators."
Conclusion: Only by Ascending Dimensions Can You Enjoy the Thrill of Dimensionality Reduction Strikes
Returning to the phenomenon mentioned at the beginning: Why do I feel more useless the more I use AI?
Because AI deprives you of the right to gain a sense of achievement through "hard labor."
Before, you spent three days organizing a beautiful report and felt very valuable; now, AI can do it in three seconds, and this illusory sense of value instantly collapses.
This is indeed painful, but it is also an awakening.
AI forces us to face that most difficult question: Besides mechanical execution, where is my true intellectual value?
For those unwilling to think, this is the worst of times. They will become mere appendages of the algorithm, unable even to perceive that they are being swallowed by a mediocre information cocoon.
But for those full of curiosity, possessing independent thinking ability, and eager to explore the essence of things, this is the best era in human history:
- All thresholds are lowered.
- All ceilings have disappeared.
- You possess the most powerful think tank and execution team in human history, on call 24/7.
Zhang Wenhong is not against AI; he is against using AI directly without building underlying abilities, outsourcing thinking and meta-cognition to AI.
He himself uses AI extensively because he has thirty years of accumulated skill as a foundation. For him, AI is wings added to a tiger; but for young doctors without that foundation, AI is like pulling up seedlings to help them grow, or drinking poison to quench thirst: short-term speed bought at the cost of long-term growth.
In the 21st century, skills will not disappear, but they will undergo a brutal purification.
Don't try to compete with AI in "solving problems"; compete with AI in "posing problems."
When you no longer regard AI as a tool for being lazy, but as super leverage that demands your highest intelligence to steer, guide, and correct,
what you see through AI is no longer your mediocre self, but an infinitely magnified, formidable super individual.