By Alpha Community
Large AI models have already surpassed the average human in certain areas, such as programming and mathematics. Reports indicate that nearly 100% of Anthropic's internal programming is now done by AI, and Google's Gemini Deep Think solved 5 of the 6 problems at IMO 2025, reaching gold-medal level.
In visual reasoning, however, even the leading Gemini 3 Pro performs only at the level of a three-year-old child on BabyVision, a benchmark of basic visual reasoning ability.
Why are large models strong in programming and mathematics yet weak in visual reasoning? The limitation lies in their "thinking process." Vision-Language Models (VLMs) must first convert visual input into language and then reason over text. But many visual tasks cannot be accurately described in words, so the models' visual reasoning suffers.
Andrew Dai, who worked at Google DeepMind for 14 years, has teamed up with veteran Apple AI expert Yinfei Yang to found a company called Elorian AI. Their goal is to lift models' visual reasoning from "child level" to "adult level," enabling models to natively "think" in visual space and thereby advance toward AGI in the physical world.
Elorian AI raised $55 million in early-stage funding co-led by Striker Venture Partners, Menlo Ventures, and Altimeter, with participation from 49 Palms and top AI scientists including Jeff Dean.
Pioneers in Multimodal Models Aim to Equip Visual Models with Reasoning Abilities
Andrew Dai, who is of Chinese descent, holds a bachelor's degree in computer science from Cambridge and a PhD in machine learning from the University of Edinburgh. He interned at Google during his PhD, joined the company in 2012, and stayed for 14 years before leaving to found his own company.
Image Source: Andrew Dai's LinkedIn
Shortly after joining Google, he co-authored with Quoc V. Le the first paper on language model pre-training and supervised fine-tuning, "Semi-supervised Sequence Learning," which laid the groundwork for the later birth of GPT. Another of his foundational papers, "GLaM: Efficient Scaling of Language Models with Mixture-of-Experts," paved the way for the now-mainstream MoE architecture.
During his time at Google, he was deeply involved in nearly every major large-model training effort, from PaLM to Gemini 1.5 and Gemini 2.5. At Jeff Dean's direction, he began leading Gemini's data work (including synthetic data) in 2023, and that team later grew to hundreds of people.
Co-founding Elorian AI with Andrew Dai is Yinfei Yang, who worked at Google Research for four years, focusing on multimodal representation learning, before joining Apple to lead multimodal model R&D.
His best-known work, "Scaling Up Visual and Vision-Language Representation Learning with Noisy Text Supervision" (the ALIGN paper), advanced the field of multimodal representation learning.
Elorian AI's co-founders also include Seth Neel, who was an Assistant Professor at Harvard University and is an expert in data and AI.
Why dwell on the groundbreaking papers written by Elorian AI's co-founders? Because their goal is not mere engineering optimization but a paradigm shift at the level of foundational architecture: upgrading AI from text-centric understanding to vision-centric understanding.
The current reality is that, despite excelling at text-based tasks, even the most advanced frontier multimodal models still stumble on the most basic visual grounding tasks.
For example, how do you fit a part precisely into a mechanical device so that it runs more accurately and efficiently? Such spatial, physical tasks are simple for an elementary school student yet challenging for existing multimodal models.
This brings us back to biology for clues. In the human brain, vision is the underlying substrate supporting many thinking processes. Humans' ability to use visual and spatial reasoning is far more ancient than language-based logical reasoning.
For instance, teaching someone to navigate a maze using language can be confusing, but drawing a sketch makes it instantly understandable.
Even a bird, without language, can recognize and reason about geographical features through vision to achieve global long-distance migration. This is a strong signal that vision is likely the correct direction for truly advancing machine reasoning.
So imagine encoding this biological visual instinct into AI's genes from the very start of model construction: building a natively multimodal model that simultaneously understands and processes text, images, video, and audio, so that visual understanding is built in rather than bolted on. Andrew Dai and his team aim to create an innate "synesthete," teaching machines not only to "see" the world but also to "understand" it.
To Andrew Dai and his team, a deep understanding of the real "physical world" is the key to achieving the next leap in machine intelligence and ultimately reaching "Visual AGI."
VLMs with Post-Hoc Reasoning Are Not the Right Path to Visual Reasoning
Teams have attempted this before. In fact, Andrew Dai's former Gemini team was already among the global leaders in the multimodal field. However, traditional multimodal models are still primarily VLMs (Vision-Language Models), built on a "two-step" logic: first convert visual input into language, then reason over text (sometimes with the help of external tools).
This post-hoc reasoning has inherent limitations. On one hand, it is prone to hallucination; on the other, many visual tasks simply cannot be described precisely in words.
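To make the limitation concrete, the following minimal Python sketch shows the describe-then-reason pattern; the function names are hypothetical placeholders rather than any real model's API, and the point is simply that the second step never revisits the pixels.

```python
# A minimal, purely illustrative sketch of the "two-step" VLM pipeline
# described above. The function names are hypothetical placeholders,
# not any real model API.

def describe_image(image) -> str:
    # Step 1: a captioner compresses pixels into words. Exact geometry
    # (distances, angles, occlusions) is largely lost at this step.
    return "a gear lying next to a slot on a metal panel"

def reason_over_text(question: str, description: str) -> str:
    # Step 2: a language model reasons only over the lossy description
    # and never looks at the pixels again, so it can only guess at geometry.
    return f"Based on '{description}', the gear probably fits the slot."

answer = reason_over_text(
    question="Will the gear mesh if inserted at this angle?",
    description=describe_image(image=None),  # stand-in for a real image
)
print(answer)
```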
Additionally, visual generation models such as Nano Banana excel at multimodal generation, but the ability to generate is not the ability to reason: the "thinking" that happens before generation still relies on a language model, not on native visual reasoning.
To develop models that truly understand the spatial, structural, and relational complexities of the visual world, disruptive innovation at the underlying technology level is necessary.
So how do they innovate? Drawing on years of experience in the multimodal field, Elorian AI's founders deeply integrate multimodal training with a new architecture designed specifically for multimodal reasoning. They abandon the traditional approach of treating images as static input and instead train models to interact with and manipulate visual representations directly, so the models can autonomously parse structure, relationships, and physical constraints.
Of course, another core element is data, which is crucial to the performance and success of these models.
Andrew Dai said the team places great importance on data quality, data mixture ratios, data sources, and data diversity. They have innovated at the data layer, reconstructing reasoning chains in visual space and making extensive, deep use of synthetic data.
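As a rough illustration of what a data mixture specification can look like, here is a hypothetical Python sketch; the source names and weights are illustrative assumptions, not Elorian AI's actual recipe.

```python
# A hypothetical sketch of a multimodal training data mixture.
# The source names and weights below are illustrative assumptions,
# not Elorian AI's actual recipe.

from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    weight: float    # sampling proportion within the training mixture
    synthetic: bool  # True if the data is synthetically generated

mixture = [
    DataSource("web_image_text_pairs",               0.40, synthetic=False),
    DataSource("annotated_video_frames",             0.20, synthetic=False),
    DataSource("synthetic_spatial_reasoning_chains", 0.25, synthetic=True),
    DataSource("synthetic_diagram_qa",               0.15, synthetic=True),
]

# Weights should sum to 1 so that batches are sampled in fixed proportions.
assert abs(sum(s.weight for s in mixture) - 1.0) < 1e-9

for s in mixture:
    kind = "synthetic" if s.synthetic else "real"
    print(f"{s.name:<38} {s.weight:.2f} ({kind})")
```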
Combined, these efforts will give rise to new AI systems that move beyond simple visual "perception" to high-level visual "reasoning."
This AI system could be a visual reasoning foundation model: a highly general model that is exceptionally proficient in one specific capability set, visual reasoning.
As a general foundation model, its application areas should be broad.
First, in robotics, it could become the underlying neural center of powerful systems, endowing them with the ability to operate autonomously in all kinds of unfamiliar environments.
For example, sending a robot to handle a sudden safety fault in a hazardous environment requires quick, accurate, on-the-spot decisions. If the robot lacks a foundation model with deep reasoning capabilities, no one would dare let it press buttons or pull levers at random. But with strong reasoning ability, it might think: "Before operating this panel, maybe I should pull this lever first to activate the safety mechanism."
Furthermore, in disaster management, models with visual reasoning could analyze satellite imagery to monitor and prevent forest fires. In engineering, they could accurately interpret complex visual blueprints and system diagrams. This ability matters because the physical world operates on principles fundamentally different from the pure-code world: you cannot design an airplane wing just by typing a few lines of code.
However, Elorian AI's models and capabilities are currently still on paper. They plan to release a model in 2026 that achieves SOTA level in visual reasoning. At that time, we can verify if their results match their claims.
When AI Truly Possesses "Visual Reasoning" Ability, How Will It Change the Physical World?
The technology for enabling AI to understand and influence the real physical world has gone through several iterations.
From image recognition in the traditional CV era, to image generation models/multimodal models in generative AI, to world models, the understanding of the physical world has been continuously enhanced.
Visual reasoning foundation models could take this a step further: genuine visual reasoning lets AI understand the physical world more deeply and thus reach a higher level of machine intelligence.
Imagine models with deep understanding and fine-grained manipulation skills empowering the embodied intelligence and AI hardware industries; their range of applications would expand greatly. Robots could take on more reliable industrial production or medical care work, and AI hardware, especially wearable devices, could become smarter personal assistants.
However, underlying these technologies is still data. As Andrew Dai mentioned earlier, data quality, data mix ratios, data sources, and data diversity all determine model performance.
In physical AI, Chinese companies are closer to world leadership, at both the model and data levels, than they are in text-based large models. If they can leverage their advantages in richer data and application scenarios to accelerate iteration, then whether in embodied intelligence or AI hardware, and whether the application is industry, healthcare, or the home, they have a greater opportunity to reach the leading tier and potentially produce world-class enterprises.