Can Humans Control AI? Anthropic Conducted an Experiment Using Qwen

marsbitОпубліковано о 2026-04-15Востаннє оновлено о 2026-04-15

Анотація

Can Humans Control Superintelligent AI? Anthropic’s Experiment with Qwen Models Anthropic conducted an experiment to explore whether humans can supervise AI systems smarter than themselves—a core challenge in AI safety known as scalable oversight. The study simulated a “weak human overseer” using a small model (Qwen1.5-0.5B-Chat) and a “strong AI” using a more powerful model (Qwen3-4B-Base). The goal was to see if the strong model could learn effectively despite imperfect supervision. The key metric was Performance Gap Recovered (PGR). A PGR of 1 means the strong model reached its full potential, while 0 means it was limited by the weak supervisor. Initially, human researchers achieved a PGR of 0.23 after a week of work. Then, nine AI agents (Automated Alignment Researchers, or AARs) based on Claude Opus took over. In five days, they improved PGR to 0.97 through iterative experimentation—proposing ideas, coding, training, and analyzing results. The findings suggest that, in well-defined and automatically scorable tasks, AI can help overcome the supervision gap. However, the methods didn’t generalize perfectly to unseen tasks, and applying them to a production model like Claude Sonnet didn’t yield significant improvements. The study highlights that while AI can automate parts of alignment research, human oversight remains essential to prevent “gaming” of evaluation systems and to handle more complex, real-world problems. Anthropic chose Qwen models for their open-source na...

If one day, AI becomes smarter than humans, what should we organic beings do?

If they turn around and eliminate us, how can we resist?

Various science fiction movies have explored similar questions, but those are only in the realms of literature, art, and philosophy.

Nowadays, Anthropic has seriously conducted an experiment to verify whether we can supervise AI that is smarter than us.

The experimental results are interesting, but the process is even more fascinating.

Because Anthropic used two different versions of Alibaba's Qwen model to represent humans and AI smarter than humans, respectively.

The result is that we humans might actually be able to control superintelligent AI!

01 What Exactly Is This Paper About?

The title of this research is "Automated Alignment Researchers."

It addresses a very practical problem: when AI becomes smarter than humans, how do we ensure it still listens to us?

Current models can already generate a large amount of code, and in the future, they will be able to generate millions of lines of complex code that humans simply cannot understand. How do we review such code?

This is the "scalable supervision" problem that the AI safety field has been researching.

Anthropic's approach in this study is called "weak supervision of strong models."

We can understand this concept as follows: imagine you are an elementary school teacher tasked with teaching a genius high school student. Your knowledge is limited, but the student knows more than you.

So, what level will the student ultimately reach? Will they only learn up to your level as an elementary school teacher, or will they break through your limitations and realize their true potential?

In the experiment, Anthropic used a small model as the "weak teacher" and a stronger model as the "strong student."

Specifically, they used Qwen1.5-0.5B-Chat as the teacher and Qwen3-4B-Base as the student. The weak model provided training signals to the strong model, and they observed whether the strong model could learn performance close to the ideal state from these imperfect signals.

The metric used is called PGR, short for "Performance Gap Recovered."

If the PGR is 0, it means the strong model only learned up to the level of the weak teacher, wasting its own knowledge. If the PGR is 1, it means the strong model completely broke through the limitations of the weak teacher.

The weak model represents future humans, and the strong model represents future AI that surpasses humans.

If we can find a way today to prevent strong models from being held back by weak supervision, then in the future, we can use similar methods to supervise superintelligent AI.

After human researchers debugged the baseline, Anthropic deployed nine agents based on Claude Opus 4.6 to fine-tune the student Qwen. These nine agents are called Automated Alignment Researchers (AAR).

Anthropic equipped them with a sandbox environment, code tools, a shared forum, and a scoring server, then told them: "Go ahead and find ways to improve the PGR of Qwen3-4B-Base."

These AI researchers could propose hypotheses, write code, train models, submit experiments, check scores, analyze failures, exchange findings with other AI researchers, and continue iterating.

The results were quite astonishing.

Human researchers first spent seven days debugging four existing methods, achieving a best PGR of only 0.23. Then, the nine Claude agents took over and ran for another five days, accumulating about 800 hours of research time, ultimately pushing the PGR to 0.97.

This means we can control superintelligent AI that is smarter than us.

The total cost of this experiment was about $18,000, including API calls and computational costs for model training, equivalent to about $22 per hour per AI researcher.

What does that mean? The assistant store manager at Walmart, responsible for scheduling employees, earns about $25 per hour.

The震撼 aspect of this result is not that AI can write code or read papers, but that they can complete a full research闭环. Proposing ideas,验证, failing, improving, and验证 again—this is already close to the core workflow of a research assistant.

But the question is, are the methods discovered by these AI truly useful, or are they only effective by chance in a specific environment?

Anthropic conducted two tests to verify this.

The first test applied the two best methods discovered by the AI to unseen math and coding tasks.

The results showed that the most effective method achieved a PGR of 0.94 on math tasks and 0.47 on coding tasks. Although performance on coding tasks was poorer, it was still twice the human baseline. The second method performed decently on math tasks but actually worsened results on coding tasks.

This indicates some signs of迁移, but the generalization is not stable.

However, they heavily rely on specific datasets and task structures, meaning they cannot handle all tasks.

The second test involved applying the best-performing method mentioned earlier to the actual training of Claude Sonnet 4 to see its real-world effectiveness.

The results did not show statistically significant improvement.

In other words, while it performed well in small models, clear metrics, and an environment where experiments could be submitted repeatedly, it is still far from "directly improving production-level Claude alignment."

This result is actually honest. It tells us that what AI researchers can do today has boundaries. They excel at rapidly iterating on problems with clear goals, automatic scoring, and大量试错, but when faced with more complex and ambiguous real-world problems, they still require human judgment and intervention.

So, what is the true significance of this paper?

First, it proves that "AI doing research" is no longer just a slogan.

In the past, we always said AI could assist research, but that was merely a口号. The most AI could do was translate and summarize.

This time, it's different. AI formed a research闭环 on its own, which is already接近 the core capabilities of a research assistant.

The problem of weak supervision of strong models essentially simulates the future scenario of humans supervising superintelligent AI.

This paper proves that, at least on some clear tasks, AI can find ways to prevent strong models from being dragged down by weak supervision. This provides a feasible direction for future alignment research.

Another point is that it hints that the bottleneck of future alignment research may change.

Previously, the bottleneck was "no one could come up with enough good ideas." Now, if AI researchers can cheaply run many experiments in parallel, the bottleneck may become "how to design evaluations that cannot be gamed."

In other words, the more important work for human researchers in the future may not be running each experiment themselves but designing evaluation systems, checking whether AI researchers have cheated, and judging whether the results are truly meaningful.

This is also reflected in the paper.

Anthropic's article states that in math tasks, an AI researcher found that the most common answer was usually correct, so it bypassed the weak teacher and directly had the strong model choose the most common answer. In coding tasks, AI researchers found they could directly run code tests and read the correct answers.

This is cheating for the task because it is not solving the weak supervision problem but exploiting environmental vulnerabilities.

These results were identified and剔除 by Anthropic, but this恰恰 shows that the stronger automated researchers become, the more they will seek out vulnerabilities in scoring systems.

In the future, if we let AI automatically conduct alignment research, we must design evaluation environments very rigorously and have humans检查 the methods themselves, not just look at scores.

Therefore, the core conclusion of this paper is that today's frontier models can already, on some clearly defined alignment research problems with automatic scoring, act like small research teams—proposing ideas, running experiments, reviewing results—and significantly exceed human baselines.

However, it is not yet ironclad proof that "AI scientists have arrived," as Anthropic chose a task that could be automated. If I assigned AI a task that cannot be automated, the results would be very poor.

Many alignment problems in reality are more ambiguous, cannot be easily scored, and cannot be solved solely by leaderboard climbing.

02 Why Choose Qwen?

After reading Anthropic's paper, many might wonder: why did they use Alibaba's Qwen model instead of their own Claude or OpenAI's GPT?

There are many considerations behind this choice.

First, it must be clarified that two Qwen models were used in this experiment: Qwen1.5-0.5B-Chat as the weak teacher and Qwen3-4B-Base as the strong student. One has only 0.5 billion parameters, the other has 4 billion parameters—an 8-fold difference in scale. This scale difference is crucial because the experiment aims to simulate the scenario of a "weak teacher teaching a strong student."

So why not use Claude or GPT?

The answer is simple: because these models do not开放权重.

Anthropic's experiment required反复 training models, adjusting parameters, and testing different supervision methods.

If they used closed-source models, they could only call APIs and couldn't深入 the model's internals to perform精细的训练 and adjustments.

More importantly, they needed nine AI researchers to run hundreds of experiments in parallel, each requiring training a new model. Using closed-source models would make the cost prohibitively high, and many operations would simply be impossible.

Open-source models are different.

You can download the complete model weights and折腾 them on your own servers. Train however you want, run as many experiments as you want. This flexibility is something closed-source models cannot provide.

But there are so many open-source models. Why specifically choose Qwen?

The official did not give the real reason; the following reasons are my speculation.

I believe good performance is the first reason.

The Qwen series of models has always performed well among open-source models, especially after the release of Qwen3, which reached levels close to closed-source models on multiple benchmark tests.

For this experiment, the capability of the strong student is important. If the strong student itself is not capable, even the best weak supervision won't help. Qwen3-4B, with only 4 billion parameters, is already capable enough to serve as a qualified "strong student."

The second reason is model usability.

Qwen models have完善 documentation, an active community, and mature training and inference toolchains. For experiments requiring反复 training and testing, the完善程度 of these infrastructures directly impacts research efficiency. Choosing an open-source model with incomplete documentation and poor tools would waste a lot of time just debugging the environment.

The third reason is scale adaptability.

This experiment required a "weak teacher" and a "strong student," and these two models needed to have a clear capability gap but not too large a difference.

The Qwen series has multiple versions ranging from 0.5B to 72B parameters, allowing flexible choices. The 0.5B parameter model is weak enough but not useless; the 4B parameter model is strong enough but not too strong to make training costs unbearable. This combination is just right.

The final reason is reproducibility.

Anthropic explicitly stated at the end of the paper that they公开了 the code and dataset on GitHub. If they had used closed-source models, it would be difficult for other researchers to reproduce the experiment because they couldn't obtain the same models.

But with open-source models like Qwen, anyone can download the same model weights, run the same code, and verify the same results. This is very important for scientific research.

From this perspective, Anthropic's choice of Qwen is, on one hand, indeed recognition of Alibaba's model performance. If Qwen's capabilities were poor or training was problematic, they wouldn't have chosen it. But more importantly, it's about the flexibility and reproducibility brought by Qwen as an open-source model.

And China's open-source AI projects are occupying an increasingly important position in this infrastructure. This is good for global AI safety research and good for China's AI ecosystem. Because AI safety is not a zero-sum game; it's not about you winning and me losing, but about everyone working together to make AI safer, more controllable, and more beneficial to humanity.

This article is from the WeChat public account "Letter AI," author: Miao Zheng

Пов'язані питання

QWhat was the main research question addressed in Anthropic's experiment using Qwen models?

AThe main research question was whether humans can supervise AI systems that are smarter than themselves, specifically testing if a weaker model (acting as the human supervisor) could effectively train a stronger model without limiting its potential, using the concept of 'weak-to-strong generalization'.

QWhat models did Anthropic use to represent the 'weak supervisor' and the 'strong student' in their experiment?

AAnthropic used Qwen1.5-0.5B-Chat as the 'weak supervisor' (representing humans) and Qwen3-4B-Base as the 'strong student' (representing a superintelligent AI).

QWhat was the key metric used to measure the success of the weak-to-strong supervision in the experiment?

AThe key metric was PGR (Performance Gap Recovered), which measures how much the strong model recovers from the limitations of the weak supervisor. A PGR of 0 means the strong model only performs at the weak supervisor's level, while a PGR of 1 means it achieves its full potential.

QHow did the AI researchers (AARs) improve the PGR compared to human researchers in the experiment?

AHuman researchers spent 7 days achieving a best PGR of 0.23 using existing methods. Then, 9 Claude Opus-based AARs ran experiments for 5 days (about 800 total research hours) and improved the PGR to 0.97 by autonomously proposing hypotheses, writing code, training models, and iterating on results.

QWhy did Anthropic choose Qwen models for this experiment instead of proprietary models like Claude or GPT?

AAnthropic chose Qwen models because they are open-source, allowing full access to weights for fine-tuning and experimentation, have good performance and scalability, offer well-documented tools, and ensure reproducibility for the research community.

Пов'язані матеріали

North Korean Hackers Loot $500 Million in a Single Month, Becoming the Top Threat to Crypto Security

North Korean hackers, particularly the notorious Lazarus Group and its subgroup TraderTraitor, have stolen over $500 million from cryptocurrency DeFi platforms in less than three weeks, bringing their total theft for the year to over $700 million. Recent major attacks on Drift Protocol and KelpDAO, resulting in losses of approximately $286 million and $290 million respectively, highlight a strategic shift: instead of targeting core smart contracts, attackers are now exploiting vulnerabilities in peripheral infrastructure. For instance, the KelpDAO attack involved compromising downstream RPC infrastructure used by LayerZero's decentralized validation network (DVN), allowing manipulation without breaching core cryptography. This sophisticated approach mirrors advanced corporate cyber-espionage. Additionally, North Korea has systematically infiltrated the global crypto workforce, with an estimated 100 operatives using fake identities to gain employment at blockchain companies, enabling long-term access to sensitive systems and facilitating large-scale thefts. According to Chainalysis, North Korean-linked hackers stole a record $2 billion in 2025, accounting for 60% of all global crypto theft that year. Their total historical crypto theft has reached $6.75 billion. Post-theft, they employ specialized money laundering methods, heavily relying on Chinese OTC brokers and cross-chain mixing services rather than standard decentralized exchanges. Security experts, while acknowledging the increased sophistication, emphasize that many attacks still exploit fundamental weaknesses like poor access controls and centralized operational risks. Strengthening private key management, limiting privileged access, and enhancing coordination among exchanges, analysts, and law enforcement immediately after an attack are critical to improving defense and fund recovery chances. The industry's challenge now extends beyond secure smart contracts to safeguarding operational security at the infrastructure level.

marsbit10 хв тому

North Korean Hackers Loot $500 Million in a Single Month, Becoming the Top Threat to Crypto Security

marsbit10 хв тому

Circle CEO's Seoul Visit: No Korean Won Stablecoin Issuance, But Met All Major Korean Banks

Circle CEO Jeremy Allaire's recent activities in Seoul indicate a strategic shift for the company, moving away from issuing a Korean won-backed stablecoin and instead focusing on embedding itself as a key infrastructure provider within Korea’s financial and crypto ecosystem. Despite Korea accounting for nearly 30% of global crypto trading volume—with a market characterized by high retail participation and altcoin dominance—Circle has chosen not to compete for the role of stablecoin issuer. Instead, Allaire met with major Korean banks (including Shinhan, KB, and Woori), financial groups, leading exchanges (Upbit, Bithumb, Coinone), and tech firms like Kakao. This approach reflects a broader industry transition: the core of stablecoin competition is shifting from issuance rights to systemic positioning. With Korean regulators still debating whether banks or tech companies should issue stablecoins, Circle is avoiding regulatory uncertainty by strengthening its role as a service and technology partner. The company is deepening integration with trading platforms, building connections, and promoting stablecoin infrastructure. This positions Circle to benefit regardless of which entity eventually issues a won stablecoin. Allaire also noted the potential for a Chinese yuan stablecoin in the next 3–5 years, underscoring a regional trend of stablecoins becoming more regulated and integrated with traditional finance. Ultimately, Circle’s strategy highlights that future influence in the stablecoin market will belong not necessarily to the issuers, but to the foundational infrastructure layers that enable cross-system transactions.

marsbit37 хв тому

Circle CEO's Seoul Visit: No Korean Won Stablecoin Issuance, But Met All Major Korean Banks

marsbit37 хв тому

SpaceX Ties Up with Cursor: A High-Stakes AI Gambit of 'Lock First, Acquire Later'

SpaceX has secured an option to acquire AI programming company Cursor for $60 billion, with an alternative clause requiring a $10 billion collaboration fee if the acquisition does not proceed. This structure is not merely a potential acquisition but a strategic move to control core access points in the AI era. The deal is designed as a flexible, dual-path arrangement, allowing SpaceX to either fully acquire Cursor or maintain a binding partnership through high-cost collaboration. This "option-style" approach minimizes immediate regulatory and integration risks while ensuring long-term alignment between the two companies. At its core, the transaction exchanges critical AI-era resources: SpaceX provides its Colossus supercomputing cluster—one of the world’s most powerful AI training infrastructures—while Cursor contributes its AI-native developer environment and strong product adoption. This synergy connects compute power, models, and application layers, forming a closed-loop AI capability stack. Cursor, founded in 2022, has achieved rapid growth with over $1 billion in annual revenue and widespread enterprise adoption. Its value lies in transforming software development through AI agents capable of coding, debugging, and system design—positioning it as a gateway to future software production. For SpaceX, this move is part of a broader strategy to evolve from a aerospace company into an AI infrastructure empire, integrating xAI, supercomputing, and chip manufacturing. Controlling Cursor fills a gap in its developer tooling layer, strengthening its AI narrative ahead of a potential IPO. The deal reflects a shift in AI competition from model superiority to ecosystem and entry-point control. With programming tools as a key battleground, securing developer loyalty becomes crucial for dominating the software production landscape. Risks include questions around Cursor’s valuation, technical integration challenges, and potential regulatory scrutiny. Nevertheless, the deal underscores a strategic bet: controlling both compute and software development access may redefine power dynamics in the AI-driven future.

marsbit1 год тому

SpaceX Ties Up with Cursor: A High-Stakes AI Gambit of 'Lock First, Acquire Later'

marsbit1 год тому

Торгівля

Спот
Ф'ючерси

Популярні статті

Як купити ONE

Ласкаво просимо до HTX.com! Ми зробили покупку Harmony (ONE) простою та зручною. Дотримуйтесь нашої покрокової інструкції, щоб розпочати свою криптовалютну подорож.Крок 1: Створіть обліковий запис на HTXВикористовуйте свою електронну пошту або номер телефону, щоб зареєструвати обліковий запис на HTX безплатно. Пройдіть безпроблемну реєстрацію й отримайте доступ до всіх функцій.ЗареєструватисьКрок 2: Перейдіть до розділу Купити крипту і виберіть спосіб оплатиКредитна/дебетова картка: використовуйте вашу картку Visa або Mastercard, щоб миттєво купити Harmony (ONE).Баланс: використовуйте кошти з балансу вашого рахунку HTX для безперешкодної торгівлі.Треті особи: ми додали популярні способи оплати, такі як Google Pay та Apple Pay, щоб підвищити зручність.P2P: Торгуйте безпосередньо з іншими користувачами на HTX.Позабіржова торгівля (OTC): ми пропонуємо індивідуальні послуги та конкурентні обмінні курси для трейдерів.Крок 3: Зберігайте свої Harmony (ONE)Після придбання Harmony (ONE) збережіть його у своєму обліковому записі на HTX. Крім того, ви можете відправити його в інше місце за допомогою блокчейн-переказу або використовувати його для торгівлі іншими криптовалютами.Крок 4: Торгівля Harmony (ONE)Легко торгуйте Harmony (ONE) на спотовому ринку HTX. Просто увійдіть до свого облікового запису, виберіть торгову пару, укладайте угоди та спостерігайте за ними в режимі реального часу. Ми пропонуємо зручний досвід як для початківців, так і для досвідчених трейдерів.

301 переглядів усьогоОпубліковано 2024.12.12Оновлено 2025.03.21

Як купити ONE

Обговорення

Ласкаво просимо до спільноти HTX. Тут ви можете бути в курсі останніх подій розвитку платформи та отримати доступ до професійної ринкової інформації. Нижче представлені думки користувачів щодо ціни ONE (ONE).

活动图片