Auto Research Era: 47 Tasks Without Standard Answers Become the Must-Test Leaderboard for Agent Capabilities

marsbit · Published 2026-05-13 · Last updated 2026-05-13

Introduction

The article introduces Frontier-Eng Bench, a new benchmark for AI agents developed by Einsia AI's Navers lab. Unlike traditional tests with clear answers, this benchmark presents 47 complex, real-world engineering tasks—such as optimizing underwater robot stability, battery fast-charging protocols, or quantum circuit noise control—where there is no single correct solution, only continuous optimization towards a limit.

It shifts AI evaluation from static knowledge retrieval to a dynamic "engineering closed loop": the AI must propose solutions, run simulations, interpret errors, adjust parameters, and re-run experiments to iteratively improve performance. This process tests an agent's ability to learn and evolve through long-term feedback, much like a human engineer tackling trade-offs between power, safety, and performance.

Key findings from the benchmark reveal two patterns: 1) improvements follow a power-law decay, becoming harder and smaller as optimization progresses, and 2) while exploring multiple solution paths (breadth) helps, sustained depth in a single path is crucial for breakthrough innovations.

The research suggests this marks a step toward "Auto Research," where AI systems autonomously conduct continuous, tireless optimization in scientific and engineering domains. Humans would set high-level goals, while AI agents handle the iterative experimentation and refinement. This could fundamentally change research and development workflows.

If we throw AI into an engineering site with no standard answers, can it still survive?

For a long time, AI Agents have appeared omnipotent, but in reality, most are just 'flipping through memories' within known knowledge bases.

Yet the real engineering world is harsh: the stability of underwater robots, the lithium plating boundary of power batteries, the noise control of quantum circuits... These problems have no 'perfect score', only 'optimizations that inch closer to the limit'.

Recently, Navers lab under Einsia AI released a new agent benchmark, Frontier-Eng Bench, which officially tears off AI's 'exam-crammer' label.

The research team didn't have AI grind through outdated coding problems. Instead, they gave it a complete 'engineering closed loop': propose a solution, connect to the simulator, digest errors, adjust parameters, and re-run.

Faced with 47 hardcore tasks spanning multiple disciplines, AI must behave like a senior engineer, seeking the optimal solution within the 'impossible triangle' of power consumption, safety, and performance.

This is not just a test suite; it's more like a rehearsal for Agent 'evolution'.

When AI begins to learn self-correction from feedback, the Auto Research era, where 'humans set goals and AI iterates non-stop 24/7', might be closer than we imagine.

AI Starts Tackling 'Hard Work'

Past large language models were more like super straight-A students.

You pose a question, it 'flips through memory' from massive training data, then pieces together an answer that seems plausible.

In this mode, the large model is essentially playing 'word chain', not solving real-world problems.

But the emergence of Frontier-Eng Bench has AI doing the work of 'engineering optimization'.

The process has shifted to letting AI first propose a solution, then connect to a simulator to run experiments, subsequently obtain feedback and errors, modify parameters and code, and continue re-running until performance improves further.
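As a rough sketch of what such a closed loop looks like in code, here is a minimal hill-climbing version in Python. The simulator, the parameter names, and the target values are all invented for illustration; they stand in for whatever real simulator and tuning knobs a given task exposes.

```python
import random

def run_simulator(params):
    """Hypothetical stand-in for a physics simulator: scores a candidate
    parameter set (here, distance to an invented 'ideal' controller)."""
    target = {"gain": 1.8, "damping": 0.6}
    error = sum((params[k] - target[k]) ** 2 for k in params)
    return -error  # higher is better, 0 is the unreachable optimum

def closed_loop(initial_params, iterations=200, step=0.1, seed=0):
    """Propose -> simulate -> read feedback -> adjust -> re-run."""
    rng = random.Random(seed)
    best = dict(initial_params)
    best_score = run_simulator(best)
    for _ in range(iterations):
        # Propose a perturbed solution (a real agent would reason here).
        candidate = {k: v + rng.gauss(0, step) for k, v in best.items()}
        score = run_simulator(candidate)
        if score > best_score:  # keep only genuine improvements
            best, best_score = candidate, score
    return best, best_score

params, score = closed_loop({"gain": 0.0, "damping": 0.0})
```

The loop is deliberately dumb (random perturbations); the benchmark's point is that an agent replaces the proposal step with reasoning over the simulator's error messages and feedback.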

In this closed-loop system, AI's identity undergoes a qualitative change.

Want to make the underwater robot more stable? AI must start automatically tuning the controller.

Want to increase the speed of the robotic arm a bit more? AI has to run simulations itself.

To some extent, AIs have shed their purely semantic understanding role and begun to act like professional engineers, continuously optimizing based on real-world environmental feedback.

The most interesting aspect of Frontier-Eng Bench is: it doesn't test whether AI 'answered correctly', but rather whether AI can continuously become stronger.

Because real engineering optimization is never about multiple-choice questions; there is no single standard answer.

Take fast-charging batteries as an example: the goal sounds simple—charge as fast as possible, but reality isn't so easy.

Under strict constraints (the temperature must not spike, the voltage must not exceed its limit, battery life must not degrade too quickly, and lithium plating must be avoided), AI must precisely hit the balance point of performance.

This means AI cannot get by with clever 'test-cramming' tricks; it must demonstrate the endurance to keep evolving through long-term feedback.
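A toy version of this constrained search can be written as a feasibility check plus a bisection for the fastest safe charging rate. Every constraint model below (temperature rise, terminal voltage, lithium-plating margin) is an invented placeholder, not real battery physics; the point is only the shape of the problem: maximize one quantity while staying inside a safety envelope.

```python
def satisfies_constraints(c_rate):
    """Toy constraint models, all invented for illustration:
    each safety quantity is a simple function of the charging rate."""
    temp_rise = 4.0 * c_rate ** 2        # degC above ambient
    voltage = 3.6 + 0.25 * c_rate        # terminal voltage, V
    plating_margin = 1.0 - 0.3 * c_rate  # must stay positive
    return temp_rise <= 15.0 and voltage <= 4.2 and plating_margin > 0.0

def max_feasible_c_rate(lo=0.0, hi=10.0, iters=50):
    """Bisect for the largest charging rate that meets every constraint:
    'charge as fast as possible' while staying inside the safe envelope."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if satisfies_constraints(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

With these made-up numbers the temperature constraint binds first, so the search settles just below the rate where the cell would overheat; in the real task the 'constraints' are a full simulator, not three inequalities.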

Can AI perform long-term optimization in real environments?

Looking at the results, GPT5.4 showed the most stable overall performance, but AI agents still have a long way to go before 'solving' the benchmark.

Auto Research Enters the 'Iterative Optimization' Era

The research team raised a very interesting point in their paper:

Truly advanced intelligence essentially relies on long-term feedback loops.

AlphaGo could defeat Lee Sedol because of the vast number of simulations and the immediate feedback behind each move, not rote memorization of established game records.

True scientific research is the same: top labs don't rely on a single burst of inspiration, but continuously propose hypotheses, run experiments, examine results, modify plans, and try again.

Engineering optimization follows the same principle: anyone can create the first version; what's truly difficult is that final 1% performance leap.

The significance of Frontier-Eng Bench lies here: For the first time, it systematically begins testing AI's 'iterative optimization capability', and has summarized two nearly brutal laws of AI evolution.

The first law is: The further you go, the harder the improvement.

This paper found that the frequency and magnitude of Agent improvements follow a power-law decay:

  • Improvement frequency ∝ 1 / iteration count
  • Improvement magnitude ∝ 1 / improvement count

Simply put: the fastest gains come in the first few rounds, and it gets progressively harder and smaller later on.

This closely resembles the real R&D process: the first version of AI can quickly eliminate many 'low-hanging fruits', but the closer it gets to the bottleneck, the more effort is required to squeeze out even a bit more performance.
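The 'improvement frequency ∝ 1/iteration count' decay has a clean illustration in the statistics of records: if an agent's attempts were independent draws, the chance that attempt n sets a new best is exactly 1/n, so improvements thin out over time even with no change in effort. The sketch below demonstrates this with random draws; it is a toy model, not the paper's actual measurement.

```python
import random

def record_times(n_draws, seed):
    """Draw i.i.d. scores and record the iterations at which a new best
    (an 'improvement') appears. For i.i.d. draws, P(record at draw n) = 1/n."""
    rng = random.Random(seed)
    best = float("-inf")
    times = []
    for n in range(1, n_draws + 1):
        x = rng.random()
        if x > best:
            best = x
            times.append(n)
    return times

# The expected number of improvements in 1000 draws is the harmonic
# number H_1000 ≈ ln(1000) + 0.577 ≈ 7.5: improvements arrive at rate 1/n.
runs = [len(record_times(1000, s)) for s in range(200)]
avg = sum(runs) / len(runs)
```

Real agents are not i.i.d. samplers, of course; the benchmark's finding is that the same 1/n flavor of decay survives even when each attempt builds on the last.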

Would it be more cost-effective to explore multiple paths in parallel for trial and error? The answer lies in the second law.

The second law: Breadth is useful, but depth is even more indispensable.

Running multiple parallel paths helps avoid getting stuck, but with a fixed budget, every additional chain makes each chain's exploration shallower.

Many engineering breakthroughs require continuous accumulation and constant correction before structural leaps emerge; they can't be achieved simply by 'trying a few more times'.
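The budget trade-off can be made concrete with a toy model in which quality only accumulates through sequential accepted steps, with gains that shrink as optimization proceeds. Under a fixed total budget, one deep chain then tends to beat the best of many shallow ones. All functions and numbers below are invented for illustration.

```python
import random

def run_chain(depth, seed):
    """Toy optimization chain: quality accumulates through sequential
    accepted steps, with gains shrinking as 1/step (power-law decay)."""
    rng = random.Random(seed)
    quality = 0.0
    for step in range(1, depth + 1):
        if rng.random() < 0.7:              # proposal accepted
            quality += rng.random() / step  # later gains are smaller
    return quality

def best_under_budget(budget, n_chains, seed):
    """Split a fixed iteration budget across parallel chains; every extra
    chain makes all of them shallower. Return the best chain's quality."""
    depth = budget // n_chains
    return max(run_chain(depth, seed * n_chains + i) for i in range(n_chains))

# Same total budget, two strategies, averaged over 50 trials:
# one deep chain versus 64 shallow ones.
deep = sum(best_under_budget(512, 1, s) for s in range(50)) / 50
broad = sum(best_under_budget(512, 64, s) for s in range(50)) / 50
```

In this accumulation-style model the deep strategy wins because a shallow chain simply never reaches the later steps where quality compounds, which is the 'structural leap needs sustained depth' point in miniature.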

This actually points towards the development direction of next-generation Agents: not models that 'output an answer once', but systems that can continuously iterate and self-evolve within long-term feedback loops.

AI Engineers Might Really Be Coming

The true far-reaching significance of this research lies in its preliminary outline of an AI system beginning to approach the real engineering cycle.

Imagine when AI connects to industrial software, simulation environments, CAD systems, chip design tools, scientific computing platforms...

A dramatic transformation in the modality of productivity is on the verge of emerging.

In future labs, a division of labor like this might appear:

Human researchers are responsible for proposing directions and goals.

For example, 'reduce this component's energy consumption by 30%', 'compress this model's forward pass GPU usage even lower', 'increase the stability of robot control a bit more', 'push the fidelity of this quantum circuit closer to the limit', etc.

And AI agents are responsible for 'grinding the path': they lock onto these goals and optimize continuously.

For example, automatically running simulations and experiments, automatically reading feedback from verifiers and simulators, then continuing to modify and optimize, iterating non-stop 24/7.

This evolutionary logic frees AI from the identity of an 'assistive tool', allowing it to begin solving complex system problems like a real engineering team—and tirelessly at that.

And the question the Frontier-Eng benchmark poses is actually very direct:

When AI begins to learn 'long-term optimization', how far is it from true engineering intelligence?

Paper Title: Frontier-Eng: Benchmarking Self-Evolving Agents on Real-World Engineering Tasks with Generative Optimization

Project Homepage: https://lab.einsia.ai/frontier-eng/

Arxiv: https://arxiv.org/abs/2604.12290

GitHub repo: https://github.com/EinsiaLab/Frontier-Engineering

This article is from the WeChat public account "Quantum Bit", author: Yun Zhong

Related Questions

Q: What is the main purpose of the Frontier-Eng Benchmark released by Einsia AI's Navers lab?

A: The main purpose of the Frontier-Eng Benchmark is to move beyond testing AI's ability to recall known information. It systematically tests AI agents' capability for 'iterative optimization' on 47 real-world, open-ended engineering tasks without standard answers, evaluating whether they can continuously improve performance through a feedback loop involving simulation, error analysis, and parameter adjustment.

Q: How does the AI's role change in the Frontier-Eng Benchmark testing process compared to traditional language models?

A: In the Frontier-Eng Benchmark, the AI transitions from acting as a 'super student' that retrieves and assembles answers from training data to performing 'engineering optimization.' Its role becomes akin to a professional engineer's: it proposes solutions, runs simulations, analyzes feedback and errors, modifies parameters and code, and reruns experiments in a continuous loop to seek optimal performance under complex constraints.

Q: What are the two key 'AI evolution laws' discovered through the Frontier-Eng Benchmark regarding iterative optimization?

A: The two key laws are: 1) Improvements become progressively harder and smaller, showing a power-law decay (improvement frequency ∝ 1/iteration count, improvement magnitude ∝ 1/improvement count). 2) While exploring multiple parallel paths (breadth) is useful, sustained depth in a single optimization path is more critical for achieving structural breakthroughs, since a fixed budget forces a trade-off between breadth and depth.

Q: What future work paradigm does the article suggest might emerge from the development of self-evolving AI agents?

A: The article suggests a future 'Auto Research' paradigm in which human researchers define the goals and direction (e.g., 'reduce component energy consumption by 30%'), while AI agents take on the role of 'grinding the path.' They would work autonomously and tirelessly, running simulations, interpreting feedback from verifiers and simulators, and iteratively optimizing 24/7 to approach performance limits.

Q: According to the article, what fundamental shift in AI capability does the Frontier-Eng Benchmark represent?

A: The Frontier-Eng Benchmark represents a fundamental shift from evaluating AI's ability to find predetermined 'correct answers' to testing its capacity for 'self-evolution' through long-term feedback loops. It moves the focus to whether AI can demonstrate sustained learning and improvement in complex, real-world scenarios with no single correct answer, pushing AI closer to genuine engineering intelligence.
