Google's Deep Think Dominates Eight-Language Olympiads, Autonomously Solves Four Unsolved Problems, Research Barriers Collapse

marsbit · Published 2026-04-08 · Updated 2026-04-08

Summary

Google DeepMind's "Deep Think" AI system has demonstrated exceptional performance across eight languages in regional academic competitions, including mathematics and informatics Olympiads. It achieved perfect scores in Japanese and French contests, and high results in Chinese, Korean, Hindi, Vietnamese, Russian, and Portuguese exams. This multi-language capability aims to reduce linguistic barriers in scientific research, enabling non-English-speaking researchers to access advanced AI tools equally. Beyond competitions, Deep Think has solved four previously unsolved mathematical problems and contributed to breakthroughs in computer science, physics, and economics. It powers the Aletheia agent, which autonomously generates and verifies research-level mathematical solutions. Despite these achievements, the results are based on internal evaluations without third-party verification or detailed methodology disclosure. Google positions Deep Think as a "human intelligence multiplier," expanding AI's role in global scientific collaboration beyond English-dominated benchmarks.

"Deep Think has defeated/matched competitors in all competitions"!

Just now, Google DeepMind senior researcher Conglong Li posted a 12-part thread on X, revealing an unprecedented scorecard.

One AI, the same brain, eight exam papers in different languages, all submitted with high scores.

Such results are rare for any model.

From IMO Gold Medals to Full Coverage of Regional Competitions

Deep Think's high scores across multiple leaderboards are not a sudden breakthrough but part of a nearly year-long evolution of capabilities.

First, it topped the most rigorous reasoning competitions.

In July 2025, Gemini Deep Think achieved the gold medal standard at the International Mathematical Olympiad (IMO) for the first time, scoring 35 out of 42 points. It also achieved similarly high-level performance at the ICPC World Finals around the same time.

Google DeepMind has since announced both results on its official blog, marking Deep Think's crossing of the "world-class competition threshold" in mathematics and programming.

Next, Deep Think began moving from "world-champion-level individual breakthroughs" to "systematic validation across languages, disciplines, and scenarios."

In February 2026, Google published three blog posts.

One introduced the Gemini 3.1 Pro model itself, one detailed a major upgrade to the Deep Think specialized reasoning mode, and one from the DeepMind scientific discovery team directly positioned Deep Think as a "human intelligence multiplier."

The upgraded Deep Think delivered a series of hard metrics:

48.4% on Humanity's Last Exam (without tool assistance), 84.6% on ARC-AGI-2 (officially verified by the ARC Prize Foundation), a Codeforces competitive programming Elo rating of 3455, and gold medal-level performance on the written portions of the 2025 International Physics and Chemistry Olympiads.

The strategy is clear: first use world-class competitions like the IMO and ICPC to prove raw reasoning power, then use multi-language, regional, and cross-disciplinary Olympiad results to prove that this deep reasoning transfers stably across languages and domains.

Gemini Deep Think's capability evolution from IMO gold medals to PhD-level research acceleration

A Detailed Look at the 8-Language Scorecard

Now, let's take a closer look at this scorecard.

Japanese results are the most impressive.

2025 35th Japanese Mathematical Olympiad Finals (JMO Finals), perfect score.

ICPC Asia Japan Preliminary Contest, perfect score.

Notably, the JMO Finals score exceeded the level corresponding to the top 80% of scores that year, meeting the official "gold medal equivalent" standard.

French results were also a perfect 100%.

The Chinese results are interesting.

At the 41st Chinese Mathematical Olympiad (CMO), Deep Think scored 86.3%, which is quite outstanding. But at the Chinese National Olympiad in Informatics (NOI), it only scored 63.3%.

The gap between 86.3% and 63.3% outlines the real boundaries of AI reasoning ability.

In math competitions, the model faces abstract deduction, proof construction, and multi-step reasoning, which happens to be Deep Think's strongest suit.

But in informatics competitions, the problem is not just "figuring it out," but also translating logic into executable code, controlling boundary conditions, considering complexity constraints, and avoiding implementation errors.

The former is closer to pure reasoning; the latter succeeds only when reasoning, algorithm design, and engineering implementation all work at once, as the sketch below illustrates.
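A toy sketch makes the gap concrete (this is an illustration, not an actual NOI problem): both functions below embody the same correct idea, counting pairs that sum to a target, but only one survives typical competition constraints.

```python
# Toy illustration: both functions are mathematically correct answers to
# the same task. Only one survives olympiad-scale constraints.

def count_pairs_naive(nums, target):
    # Correct reasoning, failed engineering: O(n^2) time blows past the
    # limit once n reaches competition scale (say, 10^6 elements).
    n = len(nums)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if nums[i] + nums[j] == target
    )

def count_pairs_fast(nums, target):
    # The same logic translated into an O(n) hash-map implementation:
    # the extra "algorithm design plus implementation" step that
    # informatics olympiads grade and math olympiads do not.
    seen = {}
    count = 0
    for x in nums:
        count += seen.get(target - x, 0)
        seen[x] = seen.get(x, 0) + 1
    return count

assert count_pairs_naive([1, 3, 2, 2], 4) == count_pairs_fast([1, 3, 2, 2], 4) == 2
```

On an olympiad-sized input, the first version is a correct proof and a failed submission at the same time.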

In the other languages—Korean, Hindi, Vietnamese, Russian, Portuguese—Deep Think also achieved results that either defeated competitors or at least matched them.

Looking at Japanese, French, and Chinese together, the most unusual aspect this time is not necessarily scoring a perfect mark in any single subject, but rather that the same model, the same Deep Think reasoning system, delivered first-tier results on exam papers in multiple languages.

Is This Scorecard Reliable?

But there is a key omission:

Conglong Li did not list competitors' specific comparative figures, and all results come from Google's own evaluations. There is no independent third-party replication, no official certification from the competitions, and the evaluation methodology is entirely undisclosed.

Was each problem attempted once or many times with the best score taken? How much computational power was used during reasoning? Was there any manual prompt engineering involved?

These details, which directly affect the credibility of the results, were also not mentioned.
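The attempt-count question alone can swing a headline number dramatically. Google has not disclosed its scoring protocol; the snippet below is only an illustration of why the question matters, using the standard pass@k estimator from Chen et al. (2021).

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): the probability that
    at least one of k samples, drawn from n attempts of which c were
    correct, solves the problem."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# The same underlying ability reads very differently depending on how
# the score is reported:
print(pass_at_k(n=100, c=20, k=1))   # 0.20, a single-attempt score
print(pass_at_k(n=100, c=20, k=10))  # ~0.90, a best-of-10 score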

Another easily overlooked point: these exams are all regional selection competitions, not international finals.

There is an order of magnitude difference in difficulty between regional competition problems and international finals.

The researcher explicitly stated that these results "will be included in the model card." As of publication, the model card has not been officially updated.

So for now, this remains a scorecard that the examinee graded and announced itself, not yet stamped by the academic affairs office.

Multilingual Research Equity: The Overlooked Real Battlefield

Why did Google specifically invest effort in evaluating 8 different regional languages?

Current evaluations of AI reasoning ability are almost entirely based on English.

MATH, GSM8K, HumanEval, ARC-AGI... these are all in English.

Mathematicians, physicists, and engineers worldwide whose native language is not English must first overcome a language barrier when using AI research tools.

Google's selection of these 8 languages is not random.

Japanese, Korean, and Chinese cover East Asian research powerhouses; Hindi and Vietnamese cover emerging markets; French, Russian, and Portuguese cover Europe and South America.

Together, this represents the majority of global research output.

In its official blog, DeepMind positioned Deep Think as a "human intelligence multiplier," saying it can "handle knowledge retrieval and rigorous verification, allowing scientists to focus on conceptual depth and creative direction."

Combined with these multi-language results, the subtext of this statement is not hard to understand: this multiplier is not just for scientists who use English.

Even more notable is how far Deep Think has already gone in real-world research applications.

DeepMind announced a mathematical research agent called Aletheia, powered by Deep Think, capable of autonomously generating, verifying, and revising solutions to research-level mathematical problems.

Aletheia, driven by Deep Think, capable of iterative generation, verification, and correction for research-level mathematical problems
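DeepMind has not published Aletheia's internals, but the described behavior, generating, verifying, and revising solutions in a loop, maps onto a simple control structure. The sketch below is purely illustrative; propose, verify, and revise are hypothetical stand-ins for calls to a reasoning model and a solution checker.

```python
# Purely illustrative sketch of a generate-verify-revise loop of the kind
# Aletheia is described as running. Nothing here is DeepMind's actual code:
# `propose`, `verify`, and `revise` are hypothetical stand-ins.

def solve(problem, propose, verify, revise, max_rounds=5):
    """Draft a solution, check it, and feed objections back until it passes."""
    solution = propose(problem)
    for _ in range(max_rounds):
        objection = verify(problem, solution)   # None means "no gaps found"
        if objection is None:
            return solution                      # verified solution
        solution = revise(problem, solution, objection)
    return None                                  # unresolved after max_rounds
```

The hard part, of course, is not the loop but making the verification step strict enough that a returned solution actually counts as research-grade.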

Aletheia has already contributed to multiple research papers, one of which was completed entirely autonomously by the AI, calculating specific structural constants in arithmetic geometry.

Furthermore, in a semi-autonomous evaluation of 700 open mathematical problems, it independently solved 4 previously unsolved problems.

The Gemini Deep Think mode also shows great potential in computer science, physics, economics, and other fields.

In computer science, Deep Think helped refute a conjecture that had remained open for a decade; in physics, it found a new analytical solution for gravitational radiation from cosmic strings; in economics, it extended an auction theory theorem.

Schematic diagram of the AI reasoning process, showing how large-scale exploration of the solution space at the network layer is aggregated into structured reasoning and confirmed through automated and manual verification.

By collaborating with experts to solve 18 research challenges, the advanced version of Gemini Deep Think helped break through long-standing bottlenecks in algorithms, machine learning and combinatorial optimization, information theory, and economics.

This goes far beyond "solving competition problems."

While competitors are still competing on English benchmark leaderboards, Google has already found a new battlefield in the "AI research accelerator" field.

The most important thing here is not the scores. The real signal is that the language barrier around AI research tools is now being treated as an engineering problem to be solved.

If this path succeeds, scientists conducting research in Japanese, Korean, Chinese, Hindi, and other languages will, for the first time, stand on the same starting line as native English speakers.

This time, Google has laid its cards on the table.

As for which competitors will follow suit, we will likely find out soon.

References:

https://blog.google/intl/ja-jp/company-news/technology/gemini-31-pro-gemini-31-pro-deep-think/

https://deepmind.google/blog/accelerating-mathematical-and-scientific-discovery-with-gemini-deep-think/

https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-pro/

https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-deep-think/

This article is from the WeChat public account "新智元" (New Zhiyuan), author: 新智元

Related Questions

Q: What is the key achievement of Google's Deep Think AI model as reported in the article?

A: Deep Think achieved top-tier results in eight different language versions of academic competitions, including perfect scores in Japanese and French math and programming contests, and high performance in Chinese, Korean, Hindi, Vietnamese, Russian, and Portuguese exams.

Q: Which world-class competitions did Deep Think first demonstrate its reasoning capabilities in?

A: Deep Think first demonstrated its reasoning capabilities by reaching the gold medal standard at the International Mathematical Olympiad (IMO) in July 2025, scoring 35 out of 42, and achieving similarly high performance at the ICPC World Finals.

Q: What is the significance of Deep Think's performance across multiple languages according to the article?

A: Its performance across multiple languages signals a breakdown of the language barrier in AI research tools, potentially allowing non-English-speaking scientists worldwide to access advanced AI research assistance on an equal footing.

Q: What are some research breakthroughs mentioned that were achieved using Deep Think?

A: Deep Think autonomously solved four previously unsolved mathematical problems, refuted a decade-old conjecture in computer science, found a new analytical solution for gravitational radiation from cosmic strings in physics, and extended an auction theory theorem in economics.

Q: What concerns does the article raise about the reliability of Deep Think's reported results?

A: All results come from internal Google evaluations, without third-party verification, official contest certification, or disclosure of testing methodology such as attempt counts, computational resources used, or any manual prompt engineering involved.

