Written by: jiayi
AI has changed our daily habits—that's a fact.
We use AI to write emails, create PPTs, search for information, and even draft social media posts. We've grown accustomed to AI's presence, as natural as relying on WiFi.
But few pause to consider: Is the AI you're using the same as what others are using?
The "Fairness" of the AI Era Is the Greatest Illusion
Silicon Valley loves to tell a story: AI gives everyone a super assistant, knowledge is no longer a privilege of the few, and equality is achieved.
It sounds beautiful. But the truth is—AI is fundamentally unfair; it's a competition of financial resources.
From chips to computing power, from model training to token consumption, every step of AI burns money.
An NVIDIA H100 chip costs over $25,000. Training a GPT-4-level model costs over a hundred million dollars. Every question you ask an AI burns tokens—and tokens have a price.
Claude Opus charges $5 per million tokens for input, $25 for output. ChatGPT Pro is $200 per month. Add Perplexity, Cursor, Midjourney... A heavy AI user can easily spend over $500 monthly on tools.
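The pricing above can be turned into a back-of-envelope estimate. The sketch below uses the Opus rates quoted in this article; the per-query token counts and the subscription tiers for the other tools are illustrative assumptions, not measured figures.

```python
# Back-of-envelope monthly cost of heavy AI usage, at the rates quoted above.
OPUS_INPUT_PER_M = 5.0    # USD per million input tokens (Claude Opus, as quoted)
OPUS_OUTPUT_PER_M = 25.0  # USD per million output tokens (as quoted)

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one API call at the quoted Opus rates."""
    return (input_tokens / 1e6) * OPUS_INPUT_PER_M \
         + (output_tokens / 1e6) * OPUS_OUTPUT_PER_M

# Assumption: a research-style query sends a 4,000-token prompt
# and gets a 2,000-token answer back.
per_query = query_cost(4_000, 2_000)
print(f"${per_query:.3f} per query")

# Assumption: 100 such queries a day, 30 days a month,
# plus ChatGPT Pro ($200) and assumed tiers for Perplexity,
# Cursor, and Midjourney ($20 + $20 + $10).
monthly = per_query * 100 * 30 + 200 + 20 + 20 + 10
print(f"${monthly:.0f} per month")
```

Even with modest assumptions, the total lands in the hundreds of dollars a month, which is the "heavy user" figure cited above.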
Some burn $5,000 a month using AI to build competitive barriers; others use the free version of ChatGPT and think they're keeping up with the times.
This isn't the same race. It's not even the same game.
National Level: The Structural Gap Is Already Irreversible
This logic is even more brutal at the national level.
The AI arms race requires three things: chips, computing power, and talent. All three require massive capital.
The United States alone controls over 70% of the world's AI computing power. China is catching up, but chip restrictions are a chokehold. As for most developing countries—in 46 emerging markets, the cost of basic broadband consumes 40% of monthly income.
When a young person in Nigeria can't even afford stable internet, what "AI equality" can we talk about?
94% of people in high-income countries have internet access, compared to only 23% in low-income countries. 84% of high-income countries have 5G coverage, while only 4% of low-income countries do.
The starting line for third-world countries in the AI era isn't just a step behind—it's not even on the field.
This structural gap can't be closed by effort alone.
Individual Level: Your Ceiling Is Being Redefined by AI
The national-level logic applies equally to individuals.
A line from my Twitter bio: An individual's ceiling = worldview + cognition + practical ability.
What has AI done to these three things?
▶️ First, AI solves many practical efficiency problems.
What used to take a week to produce an industry report now takes a day. What used to require coding from scratch now has AI setting up the framework. In terms of efficiency, AI is indeed leveling the playing field.
▶️ But second, AI is vastly amplifying cognitive gaps.
The same AI tool—what you ask, how you ask it, whether you can judge if the AI's answer is right or wrong—this entirely depends on your existing cognitive level.
A person with deep cognition uses Claude for research: they know what questions to ask, how to follow up, and which answers have flaws that need verifying. AI saves them 80% of execution time, which they reinvest in deeper thinking.
And someone with shallow cognition? They throw a question at AI and take whatever comes back. They switch off their brains and deliver it as-is. Over time, they stop thinking. AI doesn't make them smarter; it makes them lazier, dumber.
▶️ Third, the gap in delivery quality will widen dramatically.
When you query AI from your existing cognitive base, the depth, accuracy, and timeliness of what it delivers differ exponentially. Using the same Claude Opus, one person produces deep insights; another produces plausible-sounding nonsense.
A study from Finland's Aalto University is particularly interesting: The more people use AI, the more they tend to overestimate their own abilities. AI makes you "feel" stronger—the output looks professional, fluent. But if you can't discern quality, you're just producing "refined mediocrity."
So the gaps in worldview, cognition, and practical ability—these three dimensions are infinitely magnified in the AI era.
Smart people get smarter, those with cognition deepen it further, the wealthy use better tools to create greater distance. And those on the other end, with AI's "help," become lazier, shallower, poorer.
Cost × Cognition: A Double Divide Is Stacking
Here's a logic chain many haven't figured out:
Money determines what level of AI you can use → The level of AI determines the quality and depth of information you access → The quality of information determines your cognitive boundaries → Cognitive boundaries determine your decision quality → Decision quality determines how much money you can make.
This is a closed loop. The rich get richer, the poor get poorer.
The free version of ChatGPT has a hallucination rate of nearly 40%. Meaning, out of 10 questions you ask, about 4 answers are made up. The paid GPT-4 has a 28% hallucination rate, and the latest version cut that by a further 45% (to roughly 15%).
The decisions you make using the free version versus using Opus, accumulated over time, lead to completely different life trajectories.
The world will always have huge information gaps. AI didn't eliminate the information gap; AI turned it into a paywall.
Those Who Scale the Wall and Those Who Don't Are Already in Two Different Worlds
Let me share a personal observation that makes me sigh.
If you're reading this article, it's likely because you know how to bypass internet restrictions and browse on Twitter.
But think—how many people around you don't know how to do that? When you talk to them, don't you already feel a clear cognitive gap?
This isn't an IQ gap. It's long-term cognitive divergence caused by information environments.
One person's daily exposure is to the world's most cutting-edge information, deepest discussions, and highest-quality content creators. Another sees algorithm-fed short videos and filtered information streams.
Over five, ten years, these two people's thinking patterns, judgment abilities, and worldviews become completely different.
The AI era magnifies this gap another layer. Those who can bypass restrictions use Claude, Perplexity, the world's best AI tools. Those who can't—ChatGPT is blocked in China, Claude is blocked in China, they can only use localized alternatives or pay premiums through resellers.
The "walls" of the AI era aren't just physical firewalls. There are language walls—cutting-edge AI models are far more optimized for English than other languages. There are paywalls. There are algorithmic filter bubbles. Every wall divides people into different worlds.
Stanford research shows that non-English users consume 5 times the token volume for the same content when using AI. Meaning, for the same money, you get less information, of lower quality.
The Scariest Thing: You've Fallen Behind, and You Don't Know It
This is the point I most want to make in this entire article.
The free AI can also answer questions. It can also help you write. It can also help you search. So people using the free version think: "I'm using AI too, I'm not falling behind."
But the free version reasons more shallowly, hallucinates more, has older information. The answers you get "look" right, but are actually full of plausible errors.
It's like two people are "running." One is actually moving forward, the other is running in place on a treadmill. Both feel like they're running, but only one is advancing.
In psychology, there's a concept called the Dunning-Kruger effect: The less people know, the more they think they know. AI magnifies this effect tenfold—the more you rely on AI, the stronger you feel. But you've already lost the ability to think independently; you just don't know it yet.
This is the most brutal part of the AI era.
It's not that AI will replace you. It's that people using better AI, with deeper cognition, will leave you far behind. And by the time you're eliminated, you might not even understand how you fell behind.






