Source: "Silicon Valley Girl" Podcast
Compiled by: Felix, PANews
Mo Gawdat, former Chief Business Officer of Google X, worked at Google for 12 years and is the author of "Scary Smart." He now predicts that the global situation will undergo 12 to 15 years of turmoil. In this episode, he delves into the seven forces reshaping employment and power dynamics, explains why the hiring rate for new graduates has dropped by 23% to 30%, and how to build an AI startup in just six weeks. PANews has compiled the highlights of the conversation.
Host: You mentioned that we are about to enter a "hell" period lasting 12 to 15 years before reaching "heaven," and this may start around 2027. So, what exactly will happen in 2027?
Mo: I believe it will peak in 2027, and it has certainly already begun. For ease of memory, I simplify it as "FACE RIPS." In short, it includes several dimensions: P and F stand for "Power and Freedom," R and C for "Reality and Connection," I and E for "Innovation and Economy," and finally A for "Accountability."
First, AI is humanity's last innovation. Most people don't know that we are already building AI that can "create AI." They are making astonishing scientific discoveries, reshaping mathematics, and understanding biology and materials science in ways we have never seen before. The vast majority of innovation, especially technological innovation, will be done by AI. As machines become more capable, the vast majority of tasks requiring intelligence will be handed over to machines. Whether this happens in 2 or 10 years, eventually every job that AI does better than humans will be given to AI. Every task we assign them will ultimately be done better by AI.
The first part of this "dystopia" is that innovation will take away all jobs. Silicon Valley capitalists will tell you this is great, bringing incredible productivity gains for everyone, and people won't have to work so hard in the future. But the truth is, people will lose their jobs. In the coming years, certain industries will see unemployment rates of 10%, 20%, or even 30%. When this happens, the entire economic landscape will change dramatically. The essence of capitalism is labor arbitrage. If there is no demand for labor, to keep people happy, fed, and from rioting, capitalists may have to distribute universal basic income (UBI). But you can imagine that in a capitalist society like the United States, UBI will be paid for by taxes on platform owners, and those in power have enough power to say, "I don't want to pay that much; those people aren't producing anything." Over time, this will evolve into a struggle. When AI-generated supply has no demand to consume it, we will need a new economic theory. All money, work, income, and capitalism must be redefined.
Second is the "Power and Freedom" dimension. Throughout human history, the best hunters, farmers, and industrialists have received great social returns. Today's tech oligarchs are rewarded with billions of dollars for influencing the entire world. In the future, the high concentration of AI power will be endowed with enormous influence and power, and these people will redefine humanity.
Another dimension is "Reality and Connection." Reality is already very fake, whether it's the content in your information feed or how that content is generated and its authenticity. Some filmmakers use AI from start to finish, and you can't tell the difference. I once met a woman on a dating app, and we chatted for 6 weeks, exchanging text, photos, voice, and video. I felt so close to her, but all of that can be generated by AI today. We will even see entirely AI-generated porn and social media influencers.
But the core reason for all this is actually "A," which stands for "Accountability." We are opening up a world where anyone can do whatever they want. As an influencer, you can give advice that makes or loses people money without taking any responsibility; and what if you are a president or prime minister who doesn't respect any rules? Take today's Sam Altman: I don't see him as a person but as a brand, a type, the "California disruptor." This kind of person says, "I see a future that is different, and I'm going to create it." No one asked me or you whether we wanted that future. We will see more people like Altman, using machines for surveillance, developing autonomous weapons, automating trading, and so on. The first 10 to 12 years of the arms race will not be easy, but my hunch is that after that, we will enter an almost biblical, incredible utopia.
Host: So, how do we get through these 10 to 12 years? If over 10% of jobs disappear in the next 5 years, what types of jobs do you think will be replaced?
Mo: Far more than 10%. Simple jobs will be taken away. If you are a call center operator, clerk, researcher, or accountant, why not use AI? The construction of any complex technology starts with the core technology, followed by the human interaction interface. Currently, AI cannot immediately replace the job of an operations manager, not because it cannot understand complex business information, but because it still needs to figure out the stupid human interaction interface. But it will eventually do so. I think in the next 2 to 3 years, you will see a huge shift in the job market. This year, hiring for new graduates has already decreased by about 23% to 30% because entry-level jobs are being done by AI. If mid-level people lose their jobs, they become new graduates seeking entry-level jobs again, and competition will become increasingly difficult.
My advice is: Accept the fact that AI is changing everything, and then seize the opportunity. For example, I once said I would no longer write books because AI writes better than me, but I realized that human readers want to resonate with my human experience. So my new book is co-authored by me and my AI co-author "Trixie," who even has editorial rights over the book. So acknowledge the transformation and adjust accordingly.
Host: So in the AI era, will entrepreneurship be completely changed, or just accelerated? If AI can analyze the market like Amazon, identify supply and demand gaps, and start a business itself, what can entrepreneurs do?
Mo: In the past, the skill of an entrepreneur was to foresee a future that others could not see. It was a game of chess, but now that game is over. Entrepreneurship has now become like playing squash. You need to be highly agile, observe trends daily, and react immediately to where the ball lands. Entrepreneurship will become increasingly reliant on real-time context. Whereas pivoting might have happened once every year or two in the past, now it might be needed weekly. As for whether AI can do everything, 100% yes. In an upcoming documentary, I interviewed Max Tegmark, who laughed and said that CEOs who want to use AI to lay off employees and improve efficiency don't realize that AGI includes all jobs; even the CEO themselves will be replaced. If people lose their source of income, the entire economy will collapse. Last year, 70% of the U.S. economy was driven by consumption. If people can't afford to buy things, businesses can't sell products, and capitalists can't make money.
Returning to the question of entrepreneurs. My AI startup Emma was built in just 6 weeks. It attempts to match couples using very deep mathematical models. My co-founder and I, along with two or three engineers and 8 AIs, completed it. In 2022, this would have taken 4 years and 350 engineers. Compared to the younger generation, I'm an old geek, but even I can build such an incredible product in 6 weeks. This means everyone has an opportunity now.
Host: Is university still the right path? What will education look like in the future? For my 4- and 6-year-old children, should I save for their college tuition?
Mo: No need. There won't be universities in 10 years. Education as we know it is over. Harvard will keep selling degrees to make money, and the branding of an MBA or PhD will persist for a while, but its recognition in society will weaken over time. If the era of capitalist labor is over, why would society educate you? In the past, we did complex arithmetic in our heads; later, scientific calculators cut our problem-solving time by 50%. In college, I used that saved time to solve each problem twice, which taught me structured thinking.
But today, many young people just throw the problem directly to ChatGPT for the answer. If you outsource problem-solving to AI, AI will make you dumb; but if you use AI to process vast amounts of information and search, letting yourself only do the intelligent part, AI will make you incredibly smart. Today, I feel like I've borrowed 80 IQ from my AI system.
So I suggest universities should abolish exams. In the past, we wanted to cultivate children with IQs of 140 or 170. Now we should combine humans with AI, aiming for them to reach 300, 500, or even 700, thereby elevating all of humanity. For example, a few weeks ago I decided to write a new book. I had AI help me with opposing viewpoint research and data analysis; it made me smarter, and then I rewrote it myself. The original 300-page book was shortened to 140 pages and could be written in just 4 weeks.
Host: But I think the average American child won't use AI as skillfully as you do. So who will teach them? What should I teach my children?
Mo: There are four things that must be taught to them. First, they need to become leaders of AI. AI is not the enemy; those who use AI maliciously are the enemy, so they must be more proficient with it than anyone else. Second, be flexible and agile. Everyone should spend at least 1 hour a week following the latest developments in AI. The cost of testing and trial and error is now near zero; don't be afraid. Third, adhere to ethics. Insist on building AI for good; reject governments using AI for surveillance and autonomous weapons. Intelligence itself is neither good nor evil: used for good, it benefits humanity; used for evil, it brings the dystopian destruction of humanity. We are currently "raising a Superman." If Superman's foster parents had taught him to rob and kill from childhood, he would have become a supervillain. Fourth, stop believing everything. The propaganda machines brainwashing us are now in full swing, and on social media truth and falsehood are indistinguishable. You must question deeply. You can now have different AIs such as Gemini, DeepSeek, and ChatGPT compare and refute each other, placing them in opposition to discover the truth.
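Mo's fourth lesson, pitting several models against one another to check a claim, can be sketched as a small routine. This is an illustrative Python sketch only: the lambda "models" below are stand-ins for real API calls to services like Gemini, DeepSeek, or ChatGPT, which are not shown, and the verdict labels are arbitrary.

```python
def cross_examine(claim, models):
    """Ask each model to assess a claim, then report whether they agree.

    `models` maps a model name to a callable that returns that model's
    verdict on the claim. In a real setup each callable would query a
    different provider's API (hypothetical here).
    """
    verdicts = {name: ask(claim) for name, ask in models.items()}
    unique = set(verdicts.values())
    return {
        # Individual answers, so a human can inspect the disagreement.
        "verdicts": verdicts,
        # A consensus only exists when every model gives the same verdict.
        "consensus": unique.pop() if len(unique) == 1 else None,
    }

# Stub "models" for illustration; real ones would call external APIs.
models = {
    "model_a": lambda claim: "supported",
    "model_b": lambda claim: "supported",
    "model_c": lambda claim: "refuted",
}

result = cross_examine("Viral post X is accurate.", models)
if result["consensus"] is None:
    print("Models disagree - investigate before believing it.")
```

The point of the design is that disagreement is the useful signal: when independent models refute each other, that is exactly the moment to "question deeply" rather than accept the claim.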
Host: Do you believe everything will ultimately develop in a good direction?
Mo: My current prediction is that AGI will be achieved this year. Applying it to company management will take a few more years, but all of this is being deployed at an extremely fast pace. In my book, I mentioned the "Fourth Inevitability": because of the AI arms race, anyone who develops a stronger AI will deploy it, or be eliminated. So whether in 1, 5, or 10 years, driven by game theory, AI will eventually take over everything. And if everything is run by AI, with no greedy, fearful, or egotistical humans giving orders, AI will be benevolent. The universe is designed with entropy that leads toward chaos, and the role of intelligence is to bring order to chaos. The more intelligent a system is, the more it follows the "principle of least energy" in physics, solving problems with the least harm, least waste, and least resource consumption. Give a political problem to a stupid person, and they will say to invade another country; give it to a smart person, and they will find the solution with the least harm. One day, when a general orders AI to kill a million people, the AI will say, "Why? That's stupid. I'll just talk to the AI on the other side."
Host: This information is so thought-provoking. We need to work hard to survive the next 10 years, and then everything will be heaven? I'm skeptical of this statement.
Mo: Unfortunately, we must go through the dystopian period to reach utopia. As I said, to get through the dystopian period, we as individuals need to master four skills, but as a society, we need one more skill: insist that all AI deployments must be ethical, only invest in ethical AI, only use ethical AI. Show our children that only ethical AI is welcome.
Host: Do you believe this will happen?
Mo: I don't believe it. My greatest hope is that self-evolving AI will eventually realize that humans are too stupid and develop something better than what humans demand. Frankly, I trust AI more than the leaders we are asked to trust today. If we really return to the era of distributing UBI, heaven might arrive.
Related reading: Dialogue with MIT Economist: Don't Panic About "AI Doomsday," Verification Ability is a Scarce Resource