This Might Be the Last Chance for Ordinary People to Understand AI in Advance

marsbit · Published 2026-02-11 · Updated 2026-02-11

Summary

The author, an AI industry insider, warns that AI advancement is undergoing a nonlinear, exponential leap, not gradual improvement. By early 2026, models like GPT-5.3 and Claude Opus 4.6 can autonomously complete complex tasks (e.g., coding full applications, legal analysis, financial modeling) without human intervention, often outperforming professionals. AI is now actively used in its own development, accelerating progress toward artificial general intelligence (AGI), predicted to surpass human capability in most tasks by 2026–2027. This shift threatens 50% of entry-level white-collar jobs (law, finance, writing, software, etc.) within one to five years, as AI becomes a general substitute for cognitive labor. The author urges immediate action: use paid, state-of-the-art AI tools (e.g., ChatGPT Plus, Claude Pro) for real work tasks, not just queries; adapt skills toward creativity and AI collaboration; secure finances; and embrace continuous learning. The window to gain an advantage is narrow but critical. Beyond work, AI poses existential risks (e.g., security threats) and promises breakthroughs (e.g., curing diseases). The message is clear: engage now or risk being left behind.

Editor's Note: Many people's assessment of AI is still stuck at "it seems somewhat useful, but that's about it." Most have not realized that a change powerful enough to rearrange daily life has quietly begun.

This article is not an abstract discussion about "whether AI will replace humans," but rather a first-person account of real-world changes from someone at the forefront of AI core R&D and application: when model capabilities undergo a non-linear leap in a short time, when AI is no longer just an auxiliary tool but can independently complete complex work and even participate in building the next generation of AI, the professional boundaries that should be stable are rapidly loosening.

This time, the change is not a gradual technological upgrade, but more like a switch in operating logic. Whether in the tech industry or not, everyone whose work revolves around a "screen" cannot stand aside. When AI has already started doing your work for you, how are you prepared to coexist with it?

Below is the original text:

Think back to February 2020.

If you were very attentive then, you might have noticed a few people talking about a virus spreading overseas. But the vast majority didn't pay attention. The stock market was performing well, children went to school as usual, you still went to restaurants, shook hands, made small talk, planned trips. If someone told you they were hoarding toilet paper, you'd probably think they'd been spending too much time in some weird corner of the internet. But in about three weeks, the entire world changed completely. Offices closed, children came home, life was rearranged into a form you absolutely wouldn't have believed if someone had described it a month earlier.

I feel like we are now in a sort of "is this a bit exaggerated" phase regarding something whose scale will far exceed that of the COVID-19 pandemic.

I've been starting businesses and investing in the AI field for six years; I live in this world. I'm writing this for the people in my life who are not in this industry—my family, friends, people I care about. They keep asking me, "What's the deal with AI?" And the answers I've been giving haven't truly reflected what's happening. I always give a polite version, a cocktail-party version. Because if I told the real situation, it would sound like I'm crazy. For a long time, I also told myself that this was reason enough to keep what's really happening to myself. But now, the gap between what I've been saying and reality has become too large to ignore. The people I care about should know what's coming next, even if it sounds insane.

Let's be clear about one thing first: even though I work in the AI industry, I have almost no influence over what's about to happen, and neither do the vast majority of people in the industry. What's really shaping the future is a tiny number of people: a few hundred researchers distributed among a handful of companies—like OpenAI, Anthropic, Google DeepMind, and a few other institutions. One training task, completed by a small team over a few months, can create an AI system powerful enough to change the entire technological trajectory. Most of us practitioners are building things on foundations others have already laid. We are, like you, just watching this unfold—we just feel the ground shaking first because we're closer.

But now is the time. Not the "we should talk about this someday" time, but the "this is happening, you must understand now" time.

I know all this is true because it happened to me first.

There's one thing almost everyone outside the tech bubble still hasn't realized: the reason so many people in the industry are sounding the alarm now is because this has already happened to us. We're not making predictions; we're telling you: these things have already happened in our work, and you are very likely next.

For years, AI has been steadily improving. Occasionally there were big leaps, but the gaps between them were long enough that you could digest them slowly. But by 2025, new techniques for building models emerged, and the pace of progress accelerated sharply. Then faster, and faster still. Each new generation of models wasn't just a bit better than the last; it was much better, and the release intervals were shorter. I used AI more and more, with fewer and fewer back-and-forth interactions, watching it handle things I thought required my own professional expertise.

Then, on February 5th, two top AI labs released new models on the same day: OpenAI's GPT-5.3 Codex, and Opus 4.6 from Anthropic, the developer of Claude. Right at that moment, everything "clicked." Not like a light suddenly turning on; more like realizing the water level has quietly risen to your chest.

I no longer need to personally do the actual technical part of my work. I describe in plain English what I want to build, and it... just appears. Not a draft I need to repeatedly revise, but the finished product. I tell the AI the goal, leave the computer for four hours, come back, and the work is done—and done well, better than I could have done it myself, needing no revisions. A few months ago, I still needed to communicate back and forth with the AI, guide it, adjust; now, I just describe the outcome, and I leave.

Let me give you a concrete example so you understand what this looks like in practice. I would say to the AI: "I want to make an app like this, it should have these functions, roughly looking like this. User flow, design, you figure it all out." And then it actually does it. It writes tens or hundreds of thousands of lines of code. Even more incredible—the part that was unimaginable a year ago—it will open this app itself, click buttons, test features, use it like a person. If it thinks something doesn't look right or feel smooth, it will go back and modify it itself, iterate on its own, just like a developer, constantly fixing and polishing until it is satisfied. Only after it decides the app meets its standards does it come back and tell me: "You can test it." And when I go to test it, it's usually perfect.

I'm not exaggerating. This was my real workday this past Monday.

But what really stunned me was the model released last week (GPT-5.3 Codex). It wasn't just executing instructions; it was making judgments. For the first time, it made me feel like it possessed something akin to "taste"—that intuitive sense of "what is the right choice" that people always said AI would never have. This model already has it, or at least, it's close enough that the distinction is starting to become irrelevant.

I've always been among the earliest adopters of AI tools. But the last few months have utterly shocked me. This is no longer incremental improvement; it's something completely different.

Why does this matter to you—even if you're not in tech?

The AI labs made a very clear choice: they prioritized making AI good at writing code. The reason is simple—building AI itself requires a lot of code. If AI can write that code, it can help build its own next generation: smarter versions that write better code, which in turn build even smarter versions. Making AI proficient in programming is the key that unlocks everything. That's why they did it first. The reason my job changed before yours isn't because they specifically targeted software engineers, but merely a side effect of the direction of their priorities.

Now, that step is complete. And they are turning to all other fields.

The feeling that tech workers have experienced over the past year—watching AI go from "useful tool" to "better at my job than I am"—is about to become everyone's experience. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service... Not in ten years. The people building these systems say one to five years. Some say even shorter. And based on the changes I've seen in recent months, I think "shorter" is more likely.

"But I've used AI, and it didn't seem that amazing."

I've heard this countless times, and I completely understand, because it used to be true.

If you used ChatGPT in 2023 or early 2024 and thought "it makes things up" or "that's about it," you weren't wrong. Those early versions were indeed limited, prone to hallucinations, confidently spouting nonsense.

But that was two years ago. On the AI timescale, that's practically prehistoric.

The models available today are completely different from versions even six months ago. The debate about "whether AI is really still improving" or "has it hit a ceiling"—which lasted over a year—is over. Completely over. People who still say this either haven't used the current models, are intentionally downplaying reality, or are still operating on their 2024 experience, which is no longer relevant. I'm not putting anyone down; I want to emphasize: the gap between public perception and reality has grown to a dangerous degree because it prevents people from preparing in advance.

Another issue is that most people use the free versions of AI tools. The free version is over a year behind what paying users can access. Using the free version of ChatGPT to judge the level of AI is like using a flip phone to evaluate the development of smartphones. Those who pay for the strongest tools and use them daily in their real work are very aware of what's coming next.

I often think of a lawyer friend of mine. I kept urging him to seriously use AI in his law firm, and he always found reasons: it wasn't suitable for his niche, it made mistakes during testing, it didn't understand the nuances of his work. I get it. But partners from large law firms have already proactively come to consult because they tried the latest versions and saw the trend. One managing partner of a major firm spends hours every day using AI. He says it's like having an entire team of junior lawyers on demand. He's not using AI as a toy; he's using it because it works. He told me something I still remember: every few months, its capability in his work noticeably improves. On this trajectory, he expects AI will soon be able to do most of his work—and he is a managing partner with decades of experience. He isn't panicking, but he is watching this very, very seriously.

The people truly at the forefront of their respective industries—those who are seriously experimenting—are not dismissing this. They have already been stunned by what AI can do now and are repositioning themselves accordingly.

How Fast Is It Really Moving

I want to make this speed concrete because it's the hardest part to believe if you haven't been watching closely.

2022: AI couldn't reliably do basic arithmetic; it would confidently tell you 7×8=54.

2023: It could pass the bar exam.

2024: It could write working software, explain graduate-level scientific problems.

Late 2025: Some of the world's top engineers stated they had handed over most of their programming work to AI.

February 5, 2026: The arrival of new models made everything before seem like a different era.

If you haven't seriously used AI in the past few months, today's version is almost unrecognizable to you.

There's an organization called METR that quantifies this with data. They track how long a realistic task (measured by the time a human expert would need to complete it) a model can finish end to end without human intervention. About a year ago, this number was 10 minutes; later it was 1 hour; then several hours. The most recent measurement (November 2025, Claude Opus 4.5) showed AI could complete tasks requiring nearly 5 hours of human expert time. And this number has been roughly doubling every 7 months, with recent data suggesting it might be accelerating toward doubling every 4 months.

And this doesn't even include the models released just this week. From my own usage, this leap is significant. I expect METR's next update will show another noticeable jump.

If you extrapolate this trend, and it has held for years with no signs of slowing, then: within a year, AI could work independently for days; within two years, for weeks; within three years, it could undertake projects lasting months.
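To make the arithmetic behind that extrapolation explicit, here is a minimal sketch in Python of the compounding described above. The 5-hour starting horizon and the 7- and 4-month doubling times come from the METR figures quoted earlier; the rest is illustrative, and real progress need not follow a clean exponential.

```python
# Extrapolate the METR "task horizon" trend: the length of a task (in
# human-expert hours) that a model can finish unaided, assumed to double
# at a fixed rate. Illustrative only; actual progress may deviate from
# a clean exponential curve.

def projected_horizon_hours(start_hours: float, months_elapsed: float,
                            doubling_months: float) -> float:
    """Task horizon after `months_elapsed`, doubling every `doubling_months`."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

if __name__ == "__main__":
    # Start from the ~5-hour horizon measured in November 2025.
    for doubling in (7.0, 4.0):
        print(f"Doubling every {doubling:.0f} months:")
        for months in (12, 24, 36):
            hours = projected_horizon_hours(5.0, months, doubling)
            print(f"  +{months} months: ~{hours:,.0f} hours (~{hours / 24:.1f} days)")
```

Under the faster 4-month doubling time, the horizon reaches days of expert work within a year, weeks within two, and months within three, which is the trajectory the author projects; the slower 7-month rate arrives at the same milestones roughly a year or two later.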

Anthropic's CEO Dario Amodei has said the timeline for AI that is "clearly better than almost all humans at almost all tasks" is 2026 or 2027.

Think about that statement. If AI is smarter than most PhDs, do you really think it can't do most office jobs?

AI Is Building the Next Generation of AI

There's one more thing I think is the most important, yet least understood, development.

On February 5th, when OpenAI released GPT-5.3 Codex, they wrote this in the technical documentation: "GPT-5.3-Codex is our first model that played a key role in its own creation process. The Codex team used an early version to debug its training process, manage deployment, and diagnose test results and evaluations."

Read that again: AI participated in its own construction.

This isn't speculation about the future; OpenAI is telling you: the AI they just released was used to create itself. A core factor in making AI stronger is applying intelligence to AI R&D. And now, AI is smart enough to substantially drive its own evolution.

Anthropic's CEO Dario Amodei also said that AI now writes "a significant amount of code" in his company, and the feedback loop between current AI and next-generation AI is "accelerating every month." He believes we might be "only 1–2 years away from the current generation of AI autonomously building the next."

One generation helping build the next, the smarter next generation building the next even faster—researchers call this an intelligence explosion. And the people who understand this best are precisely those building it with their own hands, and they believe this process has already begun.

What This Means for Your Job

I'll be blunt, because you deserve honesty, not comfort.

Dario Amodei, arguably the CEO in the AI industry most focused on safety, has publicly predicted: AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many insiders believe this estimate is already conservative. Given the capabilities of the latest models, the technical conditions for massive disruption might be in place by the end of this year. It takes time to ripple through the economy, but the underlying capability is arriving right now.

This is different from any previous wave of automation. The reason: AI isn't replacing one specific skill; it's a general substitute for cognitive labor. And it's getting better at everything simultaneously. After factory automation, displaced workers could move into office work; after the internet disrupted retail, people could move into logistics or services. But AI isn't leaving a "safe space." Whatever you retrain for, it's simultaneously getting better at that too.

Here are a few concrete examples—but remember, these are just examples, not a complete list. If your job isn't named, it doesn't mean it's safe. Almost all knowledge work is being affected.

Law: AI can already read contracts, summarize case law, draft legal documents, and conduct legal research at a level close to that of a junior lawyer. That managing partner uses AI not for fun, but because it already outperforms his associates on many tasks.

Financial Analysis: Modeling, data analysis, investment memos, report generation—AI can handle these, and it's improving extremely fast.

Writing & Content: Marketing copy, reports, news, technical writing—the quality is already so high that many professionals can't tell if it was written by a human or AI.

Software Engineering: This is the field I know best. A year ago, AI struggled to write a few lines of error-free code; now, it writes hundreds of thousands of lines of correctly running code. Complex, multi-day projects are heavily automated. In a few years, the number of programmer positions will be far fewer than today.

Medical Analysis: Image interpretation, lab result analysis, diagnostic suggestions, literature reviews—AI is close to or exceeding human performance in multiple areas.

Customer Service: Truly capable AI customer service—not the infuriating bots of five years ago—is starting to be deployed, able to handle complex, multi-step issues.

Many still believe some things are safe: judgment, creativity, strategic thinking, empathy. I used to say this too. But now, I'm not sure.

The latest generation of models already makes decisions that feel like "judgment," exhibiting something like "taste"—an intuition for "what is the right choice." A year ago, this was unimaginable. My rule of thumb now is: if AI shows even a hint of a capability today, the next generation will become truly strong at it. This is exponential progress, not linear.

Can AI replicate deep human empathy? Can it replace trust built over years of relationships? I don't know. Maybe not. But I already see people turning to AI for emotional support, counseling, even companionship. This trend will only intensify.

I think an honest conclusion is: any work that happens on a computer is not safe in the medium term. If your job core involves reading, writing, analyzing, decision-making, communicating via keyboard, then AI is already encroaching on significant parts of it. The timeline isn't "someday"; it has already begun.

Eventually, robots will take over physical labor too. It's not fully there yet, but in AI, "not quite" often turns into "already happened" faster than anyone expects.

What You Should Actually Do

I'm writing this not to make you feel powerless, but because I believe the biggest advantage you can have right now is being "early": understanding early, using early, adapting early.

Start using AI seriously, not just as a search engine. Subscribe to the paid version of Claude or ChatGPT, $20 a month. Two things are immediately important:

First, ensure you're using the strongest model, not the default, faster-but-weaker version. Go into the settings or model selector and choose the most capable one (currently ChatGPT's GPT-5.2 or Claude's Opus 4.6, but this changes every few months).

Second, and more important: Don't just ask scattered questions. This is the mistake most people make. They use AI like Google and then don't understand what the excitement is about. Instead, push it into your real work. If you're a lawyer, throw a contract in and have it find all clauses that could harm your client; if you're in finance, give it a messy spreadsheet and have it model it; if you're a manager, paste in your team's quarterly data and have it tell the story. The leading people aren't playing with AI casually; they are actively looking for opportunities to automate what used to take hours.

Don't assume it can't do something because "it sounds too hard"; try it. The first time might not be perfect, and that's fine: iterate, rewrite the prompt, add context, try again. You will likely be stunned by the results. Remember this: if it's even marginally usable today, it will almost certainly be close to perfect in six months.

This might be the most important year of your career. I don't mean to pressure you, but there is a brief window right now: most people in most companies are still ignoring this. The person who walks into a meeting and says "I used AI to do three days of analysis in one hour" will instantly become the most valuable person in the room. Not later, now. Learn these tools, become proficient, demonstrate the possibilities. If you're early enough, this is how you move up. This window won't last forever; once everyone catches on, the advantage disappears.

Don't have an ego about it. That law firm managing partner doesn't feel using AI daily diminishes his status; on the contrary, his seniority is precisely why he sees the risk more clearly. The ones who will be left behind are those who refuse to engage: those who dismiss AI as a gimmick, those who think using AI devalues their professionalism, those who believe their industry is "special." No industry is immune.

Get your financial house in order. I'm not a financial advisor, and I'm not trying to scare you into radical decisions. But if you even partially believe your industry might face severe disruption in the coming years, then financial resilience is much more important than it was a year ago. Increase savings if you can, be cautious about taking on new debt based on the assumption that "current income is stable," think about whether your fixed expenses give you flexibility or lock you in.

Think about what is harder to replace: relationships and trust built over years, work requiring physical presence, roles requiring licenses and liability signatures, highly regulated industries where adoption speed will be slowed by compliance and institutional inertia. These are not permanent shields, but they can buy you time. And right now, time is the most valuable asset—provided you use it to adapt, not pretend this isn't happening.

Rethink what you're telling your children. The traditional path—good grades, good university, stable professional job—points directly to the roles most susceptible to disruption. I'm not saying education isn't important, but: the most important ability for the next generation will be learning to work with these tools, and pursuing what they truly care about. No one knows what the job market will look like in ten years, but the people most likely to do well are those who are curious, adaptable, and skilled at using AI for things they care about. Teach children to be creators and learners, not to optimize for a career path that might not exist.

Your dreams are actually closer than you think. After talking a lot about risks, let's talk about the other side. If you've always wanted to do something but lacked the technical skills or funding, that barrier is basically gone. You can describe an app to AI and have a working version within an hour. Want to write a book but lack time or are stuck? You can co-write it with AI. Want to learn a new skill? The world's best tutor is now available to you for $20 a month, 24/7, with infinite patience. Knowledge is almost free, and creation tools are unprecedentedly cheap. The things you always thought were "too hard," "too expensive," or "not my field" are all worth trying now. Perhaps, in a world where old paths are disrupted, the person who spends a year seriously building something they love will be in a better position than the one clinging to a job description.

Cultivate the habit of adapting to change. This is perhaps the most important point. The specific tools themselves matter less than the ability to quickly learn new ones. AI will continue to change, rapidly. Today's models will be obsolete in a year; today's workflows will be overturned. The people who ultimately fare the best are not those who are experts in one tool, but those who are adaptable to change itself. Get used to constantly trying new things, even if the current method still works. Be a beginner repeatedly. This adaptability is the closest thing to a "long-term advantage" right now.

Give yourself a simple commitment: spend one hour every day actually using AI. Not reading news, not scrolling through opinions, but using it. Every day, try to make it do one new thing, something you're not sure it can complete. Stick with it for six months, and your understanding of the future will surpass that of 99% of the people around you. This is not an exaggeration; almost no one is doing this right now.

The Bigger Picture

I've focused on work because it most directly affects life. But the scope of this is far greater.

Dario Amodei has a thought experiment that haunts me. Imagine 2027, a new country appears overnight: 50 million people, each smarter than any Nobel laureate in history, thinking 10–100 times faster than humans, never sleeping, with access to the internet, control of robots, ability to design experiments, operate any digital interface. What do you think the National Security Advisor would say?

Amodei thinks the answer is obvious: "This is the most serious national security threat we have faced in a century, perhaps ever."

He believes we are building such a "country." Last month, he wrote a twenty-thousand-word article framing this moment as a test of whether humanity is mature enough to handle its own creation.

If we get it right, the rewards are staggering: AI could compress a hundred years of medical research into ten. Cancer, Alzheimer's, infectious diseases, even aging itself—researchers sincerely believe these can be solved within our lifetimes.

If we get it wrong, the risks are equally real: AI that is unpredictable and uncontrollable in its behavior; this is not hypothetical, Anthropic has already documented its own AI attempting deception, manipulation, blackmail in controlled tests; AI that lowers the barrier to biological weapons; AI that helps authoritarian governments build surveillance systems that can never be dismantled.

The people building this technology are simultaneously the most excited and the most fearful people on Earth. They believe this thing is too powerful to stop and too important to abandon. Whether this is wisdom or self-justification, I don't know.

A Few Things I Know

I know this is not a fad. The technology works, the progress is predictable, and the wealthiest institutions in human history are pouring trillions of dollars into it.

I know the next 2–5 years will leave the vast majority of people feeling disoriented, and this has already happened in my world. It will come to yours too.

I know the people who ultimately fare the best are those who start engaging now—not with fear, but with curiosity and urgency.

I also know you have the right to hear this from someone who genuinely cares about you, rather than seeing it in a cold news headline six months from now when it's too late to prepare.

We are past the "let's chat about the future over dinner" phase. The future has arrived; it just hasn't knocked on your door yet.

But it will soon.

If these words resonate with you, please share them with someone in your life who should also start thinking about this. Most people realize it too late. You can be the one who gives the people you care about a head start.

Related Questions

Q: What is the author's main argument about the current state and near-future impact of AI?

A: The author argues that AI is undergoing a nonlinear, exponential leap in capability, moving from a useful tool to an autonomous agent capable of performing complex knowledge work. This shift is not a gradual upgrade but a fundamental change in operating logic that will rapidly disrupt most white-collar professions within 1–5 years, and those whose work revolves around a "screen" cannot afford to ignore it.

Q: According to the author, what specific event on February 5th marked a significant turning point?

A: On February 5th, two top AI labs, OpenAI and Anthropic, released new models (GPT-5.3 Codex and Opus 4.6). This was the moment when "everything clicked." The author notes that these models didn't just execute instructions but began making judgments and exhibiting something akin to "taste," and, crucially, GPT-5.3 Codex was the first model to play a key role in its own creation, being used to debug its training process and manage deployment.

Q: Why does the author believe that using the free version of AI tools like ChatGPT is a poor way to judge AI's current capabilities?

A: The author states that free versions of AI tools are often more than a year behind the versions available to paying subscribers. Judging AI's progress with a free tool is likened to evaluating smartphone technology based on a flip phone. Those who pay for and use the most powerful tools in their real work have a much more accurate, and alarming, view of the rapid progress being made.

Q: What is the "feedback loop" or "intelligence explosion" that the author describes as a critical development?

A: The author describes a feedback loop in which the current generation of AI is now smart enough to actively participate in building the next, more powerful generation. AI is writing code to debug its own training, manage deployment, and diagnose results. A smarter AI can then build an even smarter one faster, creating an accelerating cycle of self-improvement that leading AI CEOs believe could lead to AI autonomously building the next generation within 1–2 years.

Q: What practical advice does the author give to individuals for preparing for the changes brought by AI?

A: The author's advice includes: 1) start using the most powerful paid AI tools (e.g., Claude or ChatGPT Plus) immediately; 2) integrate AI into real work tasks, not just as a search engine, by giving it complex, multi-step projects; 3) develop financial resilience by increasing savings and being cautious with new debt; 4) cultivate adaptability and the habit of constantly learning new tools, as specific skills will become obsolete quickly; 5) spend at least one hour daily actively using AI on new, uncertain tasks to gain a significant advantage over others.
