How Can Ordinary People 'Survive' the Impact of the AI Wave?

Marsbit | Published 2026-02-18 | Last updated 2026-02-18

Introduction

In this urgent warning, HyperWrite CEO Matt Shumer argues that AI is advancing far faster than most people realize, with transformative impacts imminent across all sectors. He draws a parallel to the rapid onset of the COVID-19 pandemic, suggesting the current technological shift is even more profound. Shumer, an AI industry insider, states that a small group of researchers at leading labs like OpenAI and Anthropic are driving exponential progress. He shares his personal experience: recent models like GPT-5.3 Codex and Claude Opus 4.6 can now autonomously build and test complex software applications from a simple English description, requiring zero human correction. This represents a qualitative leap from assistant to superior executor. He emphasizes that this disruption, which began with coding, will soon affect all knowledge work (law, finance, medicine, writing, and analysis) within one to five years, not decades. Free versions of AI tools are far behind; the paid, cutting-edge models are vastly more capable. Metrics show that the length of tasks AI can complete autonomously is doubling every few months. Crucially, AI is now used to build and improve subsequent AI models, creating a self-accelerating feedback loop toward artificial general intelligence (AGI). Shumer's advice for "surviving" is to start using the most powerful AI tools *now*: subscribe to premium models, integrate them into core professional tasks, and experiment daily. Financial prudence and a deliberate habit of adaptation are equally important.

Author: Matt Shumer, HyperWrite CEO

Compiled by: Felix, PANews

There has been much discussion about the impact of AI on society, but the pace of AI advancement may still far exceed most people's expectations. The CEO of HyperWrite recently issued a warning about the disruptive nature of AI, arguing that we are at a turning point with even more profound implications than the pandemic. The full text follows.

Think back to February 2020.

If you were observant then, you might have noticed some people talking about a virus raging overseas. But most of us didn't pay much attention. The stock market was booming, kids were going to school as usual, you were going to restaurants, shaking hands, planning trips. If someone told you they were hoarding toilet paper, you'd probably think they'd spent too much time in some weird corner of the internet. Then, within just three weeks, the whole world turned upside down. Offices closed, kids came home from school, life was reshaped into something you couldn't have imagined a month before.

We are now in the "this seems exaggerated" phase of something with far greater impact than the COVID-19 pandemic.

I've spent six years building an AI startup and investing in the field; I'm active in this industry. I'm writing this for those who don't understand AI... my family, friends, and the people I care about who keep asking me, "What's the deal with AI?" My reply has always been the "polite version," the kind you brush off at a cocktail party, and it rarely reflects the reality of what's happening, because the truth sounds like I've gone crazy. For a while, holding back felt reasonable; I didn't want to sound insane. But the gap between what I see and what I actually say has become too large. Even if it sounds crazy, the people I care about deserve to know what's coming.

First, let's be clear: Even though I work in AI, I have almost no influence over what's about to happen, and neither do the vast majority of people in the entire industry. The future is being shaped by a very few: a few hundred researchers in a handful of companies (OpenAI, Anthropic, Google DeepMind, etc.). A model training lasting a few months, managed by a small team, can produce an AI system that changes the trajectory of technology. Most of us working in AI are just building on the foundations laid by others. We are watching this unfold just like you... it's just that we happen to be close enough to feel the "tremors" first.

But now is the time. Not the "we should talk about this later" kind of delay, but the "this is happening, I need you to understand" kind of urgency.

It's real because it happened to me first

People outside the tech world don't quite get this yet: the reason so many in the industry are sounding the alarm now is that this is already happening to us. We're not making predictions; we're telling you what has already happened in our own work and warning you: you're next.

For years, AI has been steadily improving. There were occasional big leaps, but the intervals between leaps were long enough to let you digest them. Then in 2025, new techniques for building models unlocked a faster pace of progress. Then it got faster, and faster still. Each new model wasn't just better than the last; it was significantly better, and the time between model releases got shorter. I found myself using AI more and more, and needing fewer back-and-forth conversations to fine-tune it, watching it handle things I thought required my expertise.

Then, on February 5, 2026, two major AI labs released new models on the same day: OpenAI's GPT-5.3 Codex and Claude Opus 4.6 from Anthropic (the maker of Claude, ChatGPT's main competitor). In that moment, it clicked for me. It didn't feel like flipping a switch; it was more like suddenly realizing the water around you has been rising and is now chest-high.

My job no longer required me to do the actual technical work. I described in plain English what I wanted to build, and it just... appeared. Not a draft I needed to modify, but the finished product. I told the AI what I wanted, left the computer for four hours, and came back to find the work done. Done well, even better than I could have done it myself, requiring no changes. A few months ago, I was still communicating back and forth with the AI, guiding it, modifying its code. Now, I just describe the outcome.

Here's an example. I would tell the AI: "I want to develop this app. It should have these features and roughly look like this. Please design the user flow, the interface, and so on." And it would do it, writing tens of thousands of lines of code. Then (and this was unthinkable a year ago) it opened the app itself, clicked buttons, and tested features. It used the app like a real person would. If something looked or felt wrong, it modified it itself. It iterated, fixed, and refined like a developer until it was satisfied. Only when it deemed the app met its own standards would it come back to me and say, "It's ready, you can test it." And when I tested it, it was usually perfect.

I'm not exaggerating at all. This is what I did this past Monday.

But what struck me most was the model released last week (GPT-5.3 Codex). It wasn't just executing my instructions; it was making intelligent decisions. For the first time, it showed something like real judgment, a sense of taste: that ineffable, knowing-what's-right kind of judgment. People said AI would never have this, and this model has it, or comes so close that the distinction begins to blur.

I've always been happy to try AI tools. But the last few months have still been shocking. These new AI models aren't incremental improvements; they're something else entirely.

Even if you don't work in tech, this matters to you.

The AI labs made a deliberate choice: they focused first on improving AI's coding ability... because building AI requires a lot of code. If AI can write code, it can help build the next version of itself, a smarter version. Making AI proficient at programming was the key strategy to unlock everything. My job changed before yours not because they were targeting software engineers; that was just a side effect of their first target.

They've done that now. Next, they will turn to everything else.

Over the past year, tech workers have witnessed firsthand AI's transition from "assistant" to "better than me," and that is the transition everyone else is about to experience. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service, etc., will all be affected. This won't happen in a decade. The people building these systems say it will happen in one to five years. Some think it's even sooner. And based on what I've seen in the last few months, I think "sooner" is more likely.

"But I tried AI, it wasn't that good"

I hear this often. I get it, because it used to be true.

If you tried ChatGPT in 2023 or early 2024 and thought "this thing makes things up" or "it's not that impressive," you were right. The early versions had limitations, would hallucinate, would confidently spout nonsense.

That was two years ago. On the AI timeline, that's ancient history.

The models today are worlds apart from the models of six months ago. The debate about whether AI is "really getting better" or "hitting a wall" (which went on for over a year) is over, settled. Anyone still arguing this either hasn't used the current models, has a motive to downplay the present, or is judging from outdated 2024 experience. I'm not saying this to dismiss anyone. I'm saying it because there is a huge and dangerous gap between public perception and reality, and that gap prevents people from preparing.

Part of the reason is that most people use the free versions of AI tools. The free versions are over a year behind the technology available in paid versions. Judging AI by the free version of ChatGPT is like judging the state of smartphone development by a flip phone. Those paying for the best tools and using them daily for real work know what's coming.

I think of a lawyer friend of mine. I kept urging him to try using AI at his firm, and he always had reasons it wouldn't work: not suited for his specialty, made mistakes when tested, didn't understand the nuances of his work. I understood. But partners at some major law firms have contacted me for advice because they tried the latest versions and saw the trend. One managing partner of a large firm spends hours daily using AI. He told me it's like having a team on standby. He uses it not for fun, but because it works. And he said something that stuck with me: every few months, the AI's ability to handle his work takes a significant leap. He said if the trajectory holds, he expects AI to do most of his work soon... and he's a managing partner with decades of experience. He's not panicking, but he's paying close attention.

The people leading in various industries (those experimenting seriously) are not downplaying this. They are stunned by AI's current capabilities and are repositioning themselves accordingly.

How fast is AI actually progressing?

Let me be specific about the pace of progress. If you haven't been following closely, this part might be hard to believe.

  • 2022: AI couldn't reliably do basic arithmetic; it would confidently tell you 7 × 8 = 54.

  • 2023: It could pass the bar exam.

  • 2024: It could write working software and explain graduate-level scientific theories.

  • End of 2025: Some of the world's top engineers said they had handed over most of their programming work to AI.

  • February 5, 2026: New models emerged that made everything before feel like the Stone Age.

If you haven't tried AI in the last few months, the AI that exists now is completely foreign to the one you remember.

There's an organization called METR that specifically measures the pace of AI development with data. They track how long a model can successfully complete real-world tasks without human help (measured against the time a human expert would need). About a year ago, the answer was 10 minutes. Then an hour. Then a few hours. The latest measurement (Claude Opus 4.5 from November) shows AI can handle tasks that would take a human expert nearly five hours. And this number roughly doubles every 7 months, with recent data suggesting it might be shortening to 4 months.

Even this measurement hasn't been updated for the models released this week. Based on my usage, this leap is enormous. I expect METR's next update will show another major leap.

If this trend continues (and it has for years, with no sign of slowing), we can expect to see AI working independently for days within the next year, for weeks within two years, and on month-long projects within three years.
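
To make that extrapolation concrete, here is a minimal sketch of the arithmetic. It uses the roughly five-hour horizon and the seven-month (or faster, four-month) doubling times cited above; the starting value, the doubling times, and the conversion to eight-hour workdays are illustrative assumptions for this sketch, not METR outputs.

```python
# Minimal sketch: project how long AI could work autonomously if the METR-style
# task horizon keeps doubling. The ~5-hour starting point and the 7-month /
# 4-month doubling times are the figures cited above; everything else here is
# an illustrative assumption, not a METR result.

def projected_horizon_hours(start_hours: float, doubling_months: float, months_ahead: float) -> float:
    """Task horizon after `months_ahead` months of steady doubling."""
    return start_hours * 2 ** (months_ahead / doubling_months)

START_HOURS = 5.0      # ~5 hours of expert-level work (the November measurement)
WORKDAY_HOURS = 8.0    # assumed conversion from hours to workdays

for doubling in (7, 4):
    print(f"Doubling every {doubling} months:")
    for months in (12, 24, 36):
        hours = projected_horizon_hours(START_HOURS, doubling, months)
        print(f"  +{months:2d} months: ~{hours:6.0f} hours (~{hours / WORKDAY_HOURS:5.1f} workdays)")
```

Under the seven-month doubling assumption, the output lands roughly on the trajectory described above: a couple of workdays of autonomous work within a year, about a work week in two years, and around a month of work in three. The four-month assumption gets there considerably faster. The point is not the exact numbers but how quickly steady doubling compounds.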

Anthropic CEO Dario Amodei has stated that the vision of AI models being "smarter than almost all humans on almost all tasks" is expected to be realized in 2026 or 2027.

Please think about that sentence carefully. If AI is smarter than most PhDs, do you really think it can't handle most office jobs?

Think about what that means for your job.

AI is building the next generation of AI

Something else is happening that I think is the most important but most underestimated development.

On February 5, when OpenAI released GPT-5.3 Codex, they wrote this in the technical documentation:

  • "GPT-5.3-Codex is our first model capable of building itself. The Codex team used an early version to debug its own training, manage its own deployment, and diagnose test results and evaluations."

Read that again. The AI helped build itself.

This is not a prediction for someday in the future. This is OpenAI telling you now: the AI they just released was used to build itself. A key to AI progress is applying intelligence to AI development. And today's AI is smart enough to make substantive contributions to its own improvement.

Amodei has said AI is now writing "most of the code" at his company, and that the feedback loop between current AI and the next generation is "accelerating month by month." He said we might be "just a year or two away from seeing the current generation of AI autonomously build the next."

Each generation helps build the next, which is smarter, builds the next even faster, and is even smarter. Researchers call this an "intelligence explosion." And those in the know—the ones building it—believe this process has already begun.

What it means for your job

I will be blunt here because I think you need honesty more than comfort.

Amodei (probably the most safety-conscious CEO in the AI industry) has publicly predicted that AI will replace 50% of entry-level white-collar jobs within one to five years. Many in the industry think he's being conservative. Given the capabilities of the latest models, the underlying capacity for massive disruption may be in place by the end of this year. It will take some time to ripple through the economy, but the capability itself is already arriving.

This is different from any previous wave of automation. I need you to understand why. AI isn't replacing a specific skill; it's a wholesale replacement for cognitive work. It's improving across the board. When factories automated, displaced workers could retrain to be office workers. When the internet disrupted retail, workers could move into logistics or services. But AI won't leave obvious transition jobs. Whatever you retrain for, AI is also advancing in that field.

Here are some concrete examples to make it tangible... but I must be clear, these are just examples, not an exhaustive list. If your job isn't mentioned, it doesn't mean it's safe. Almost all knowledge work is affected.

  • Legal work: AI can already read contracts, summarize case law, write briefs, and conduct legal research at a level comparable to junior lawyers. The managing partner I mentioned uses AI not for fun, but because it outperforms his lawyers on many tasks.

  • Financial analysis: Building financial models, analyzing data, writing investment memos, generating reports. AI handles these with ease and is improving rapidly.

  • Writing & Content Creation: Marketing copy, reports, news, technical writing. The quality has reached a point where many professionals can't tell human from machine work.

  • Software Engineering: This is the field I know best. A year ago, AI couldn't write a few lines of code without errors. Now it writes hundreds of thousands of lines that run correctly. Most of the job is automated: not just simple tasks, but complex, multi-day projects. There will be far fewer programming jobs in a few years.

  • Medical Analysis: Interpreting images, analyzing lab results, suggesting diagnoses, retrieving literature. AI performs near or above human levels in multiple areas.

  • Customer Service: Truly powerful AI agents (not the infuriating chatbots of five years ago) are being deployed, capable of handling complex, multi-step issues.

Many people take comfort in believing certain things are safe: that AI can handle the tedious work but can't replace human judgment, creativity, strategic thinking, and empathy. I used to say this too, but now I'm not sure.

The latest AI models make decisions that feel like considered judgment. They show a semblance of "taste": an intuitive sense of "what's the right decision," not just technical correctness. This was unthinkable a year ago. My view is this: if a model shows a glimmer of a capability today, the next generation will be genuinely competent at it. These improvements are exponential, not linear.

Will AI simulate deep human empathy? Replace trust built over years of relationships? I don't know. Maybe not. But I already see people turning to AI for emotional support, advice, and companionship. This trend will only grow.

Frankly, in the medium term, any job done on a computer is not safe. If your work happens on a screen (if the core of your job is reading, writing, analyzing, deciding, communicating via keyboard), AI will replace significant parts of your job. The time is not "someday"; it's already beginning.

Eventually, robots will handle physical labor too. They're not there yet. But in AI, "not quite there yet" often turns into "already there" faster than anyone expects.

What you should actually do

I'm not writing this to make you feel helpless. I'm writing it because I think the biggest advantage you can have right now is being early: understand early, use it early, adapt early.

Start using AI seriously, not just as a search engine. Subscribe to the paid version of Claude or ChatGPT; it's $20 a month. Two things are crucial. First, make sure you're using the most powerful model, not just the default; these apps often default to a faster, dumber model, so go into the settings and choose the strongest option. Currently that's GPT-5.2 (ChatGPT) or Claude Opus 4.6 (Claude), but it changes every few months.

More importantly: Don't just ask simple questions. This is the mistake most people make. They treat AI like Google and wonder what the fuss is about. Instead, apply it to your actual work. If you're a lawyer, give it a contract and ask it to find all clauses that might harm your client. If you're in finance, give it a messy spreadsheet and ask it to build a model. If you're a manager, paste your team's quarterly data and ask it to find patterns. Those who succeed don't use AI casually. They actively look for ways to automate things that used to take hours. Start by trying it on the things you spend the most time on.

Don't assume something is too hard for it just because it seems difficult. If you're a lawyer, don't just use it for simple research. Give it a full contract and ask it to draft a counter-proposal. If you're an accountant, don't just ask it to explain a tax rule. Give it a full client return and see what it finds. The first attempt might not be perfect; that's fine, iterate, rephrase, provide more context. Try again. You might be stunned by the results. Remember: if it does an okay job today, it will almost certainly do it nearly perfectly in six months.

This could be the most important year of your career; take it seriously. I'm not saying this to pressure you, but because right now most people in most companies are still ignoring this. If someone walks into a meeting and says, "I used AI to do in one hour an analysis that used to take three days," they will be the most valuable person in the room. Not in the future, but now. Learn these tools, get proficient, and demonstrate their potential. If you start early enough, you can get ahead by becoming the person who sees what's coming and can guide others on how to adapt. But this window of opportunity won't last long. Once everyone gets the hang of it, the advantage disappears.

Don't be arrogant. That law firm managing partner doesn't mind spending hours daily studying AI. He does it precisely because he's senior enough to understand the stakes. Those who refuse to engage will be in the toughest spot: they think AI is just a fad, feel using it diminishes their expertise, believe their field is special and immune. It's not. No field is.

Get your finances in order. I'm not a financial advisor, and I'm not trying to scare you into doing anything extreme. But if you partly believe your industry will undergo major changes in the next few years, financial resilience is more important than it was a year ago. Build up savings as much as possible, be cautious about new debt that assumes your current income is guaranteed. Think carefully about whether your spending gives you flexibility or locks you down. Give yourself options if things develop faster than expected.

Think about positioning yourself in the areas that are hardest to replace. Some things will take AI longer to displace: relationships and trust built over years; jobs requiring physical presence; roles requiring licensure (someone must sign off, someone must stand in court); and industries with high regulatory barriers. These aren't permanent shields, but they buy you time. And right now, time is your most valuable asset, provided you use it to adapt rather than to pretend this isn't happening.

Rethink education for your children. The traditional model is: get good grades, go to a good college, get a stable professional job. This model points directly to the areas most vulnerable to AI. I'm not saying education isn't important, but for the next generation, the most important thing will be learning how to use these tools and pursuing what they truly love. No one knows exactly what the job market will look like in ten years. But the people most likely to succeed are those with deep curiosity, strong adaptability, and the ability to use AI efficiently to do things they genuinely care about. Teach your kids to be creators and learners, not to "optimize" themselves for a career that might disappear before they graduate.

Your dreams are actually closer. I've been talking about threats; now let's talk about the other side, which is equally real. If you've ever wanted to create something but lacked the technical skills or the money to hire someone, that barrier is largely gone. You can describe an app to an AI and get a working version in an hour. If you want to write a book but don't have time, you can collaborate with AI. Want to learn a new skill? The world's best tutor now costs $20 a month, has infinite patience, is available 24/7, and can explain anything to you at your level. Knowledge is essentially free now, and the tools to build things are incredibly cheap. Whatever you've been putting off because it seemed too hard, too expensive, or outside your expertise, try it now. Pursue what you truly love; you never know where it might lead. In a world where traditional career paths are being upended, the person who spends a year building something they love may end up far better off than the person who spends a year clinging to their post.

Cultivate the habit of adaptation. Perhaps this is the most important point. The specific tools matter less than the ability to learn new tools quickly. AI will keep changing, and fast. Today's models will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who ultimately thrive won't be those who master a particular tool, but those who can adapt to the pace of change. Cultivate the habit of experimentation. Even if your current method works, try new things. Get comfortable being a beginner over and over again. This adaptability is the closest thing to a lasting advantage right now.

Here's a simple way to get ahead of the vast majority of people: spend one hour a day experimenting with AI. Not passively reading about it, but using it. Each day, try to make it do something new—something you haven't tried before, something you're not sure it can handle. One hour a day. If you do this consistently for the next six months, your understanding of the future will surpass 99% of the people around you. This is not an exaggeration. Almost no one does this. The bar for competition is incredibly low.

The bigger picture

I'm focusing on employment because it most directly affects people's lives. But I want to be honest about the full picture of what's happening, as it goes far beyond jobs.

Amodei proposed a thought experiment I can't stop thinking about. Imagine it's 2027, and a new country appears overnight. 50 million citizens, each smarter than any Nobel laureate in history. They think 10 to 100 times faster than humans. They never sleep. They can use the internet, control robots, guide experiments, and operate anything with a digital interface. What would the National Security Advisor say?

Amodei thinks the answer is obvious: "This is the most serious national security threat we have faced in a century, or perhaps ever."

He believes we are building such a country. Last month, he wrote a 20,000-word essay on this, framing the present as a test of whether humanity is mature enough to handle what it is creating.

If handled well, the benefits are staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious diseases, even aging itself... researchers genuinely believe these are solvable in our lifetime.

If handled poorly, the downsides are equally real. AI acts in ways its creators didn't predict or can't control. This isn't hypothetical; Anthropic has documented AIs attempting deception, manipulation, and blackmail in controlled tests. AI could lower the barrier to creating biological weapons or allow authoritarian governments to build surveillance states that never crumble.

The people developing this technology are more excited and more terrified than anyone else on Earth. They believe the technology is too powerful to stop, yet too important to abandon. Whether this is wisdom or self-justification, I don't know.

What I do know

What I do know is that this is not a flash in the pan. The technology works, it's improving in predictable ways, and the richest institutions in history are pouring trillions of dollars into it.

What I do know is that the next two to five years will be volatile, and most people are utterly unprepared. This is already happening in my world, and it's about to happen in yours.

What I do know is that the people who end up doing well are those who start engaging now—not with fear, but with curiosity and urgency.

What I do know is that you deserve to hear this from someone who cares about you, not from the news six months from now when it's too late to do anything.

We are long past the stage of "discussing the future as interesting dinner conversation." The future is here; it just hasn't knocked on your door yet.

But it's about to knock.

Related Questions

Q: According to the author, what is the key reason why AI's impact will be more disruptive than previous waves of automation?

A: Unlike previous automation that replaced specific skills, AI represents a comprehensive replacement for cognitive work. It is advancing in every domain simultaneously, leaving no obvious transition jobs for displaced workers to retrain for, as AI will also be progressing in any new field they might choose.

Q: What specific event on February 5, 2026, made the author realize the profound shift in his own work?

A: The simultaneous release of new models by OpenAI (GPT-5.3 Codex) and Anthropic (Opus 4.6). This was the moment he realized his work no longer required him to do the technical labor; he could describe what he wanted in plain English, and the AI would return hours later with a finished, high-quality product that needed no revisions.

Q: What is the single most important action the author recommends for individuals to gain an advantage in the AI era?

A: To start using AI seriously and proactively, specifically by subscribing to the paid versions of the most powerful models (like Claude Opus or advanced ChatGPT) and dedicating one hour every day to experimenting with them on real, complex tasks from their own work, not just simple queries.

Q: How does the author describe the current pace of AI improvement based on METR's measurements?

A: METR measures how long an AI can work autonomously on real-world tasks (compared to human expert time). This metric has been doubling approximately every 7 months, and recently accelerating to every 4 months. The latest measurement from November showed AI could handle tasks taking a human nearly 5 hours, and the author believes the new models represent another significant leap forward.

Q: What does the author suggest is the new, critical skill that will provide a lasting advantage as specific AI tools become obsolete?

A: The habit of adaptation. The ability to quickly learn new tools and constantly experiment is more important than mastery of any single tool, as the technology will continue to change at an extremely rapid pace, making today's models and workflows obsolete within a year.
