Exploring Physical World AGI with "Visual Reasoning", ElorianAI Raises $55 Million

marsbit · Published 2026-04-23 · Last updated 2026-04-23

Abstract

ElorianAI, co-founded by ex-Google AI expert Andrew Dai and former AI specialist Yinfei Yang, has raised $55 million in early funding to develop next-generation AI systems with advanced visual reasoning capabilities. While current large models excel in text-based tasks like programming and math, they perform poorly in visual reasoning—even top models like Gemini only match a 3-year-old’s ability in basic visual benchmarks. The key limitation lies in the architecture of current vision-language models (VLMs), which first convert visual inputs into text before reasoning, losing critical spatial and structural information. ElorianAI aims to build a native multimodal model that processes and reasons directly in visual space, enabling deeper understanding of physical relationships, constraints, and environments. The company plans to release a state-of-the-art visual reasoning model by 2026, with potential applications in robotics, disaster management, engineering, healthcare, and AI hardware. By using high-quality, diverse, and synthetically generated data, ElorianAI intends to create models that don’t just perceive but truly understand and reason about the physical world—bringing us closer to visual AGI.

By Alpha Community

AI large models have surpassed the average human in certain areas, such as programming and mathematics. Reports indicate that nearly all of Anthropic's internal code is now written by AI, and Google's Gemini Deep Think solved 5 of 6 problems at IMO 2025, reaching gold-medal level.

However, in visual reasoning, even the leading Gemini 3 Pro only reached the level of a 3-year-old child on BabyVision, a benchmark testing basic visual reasoning abilities.

Why are large models strong in programming and mathematics but weak in visual reasoning? The limitation lies in their "thinking process." Vision-language models (VLMs) must first convert visual input into language and then reason over text. But many visual tasks cannot be accurately described in words, so the models' visual reasoning suffers.

Andrew Dai, who worked at Google DeepMind for 14 years, teamed up with Apple's seasoned AI expert Yinfei Yang to establish a company called Elorian AI. Their goal is to elevate the model's visual reasoning ability from "child level" to "adult level," enabling the model to natively "think" within the "visual space" and thereby advance toward AGI in the physical world.

Elorian AI raised $55 million in early-stage funding co-led by Striker Venture Partners, Menlo Ventures, and Altimeter, with participation from 49 Palms and top AI scientists including Jeff Dean.

Pioneers in Multimodal Models Aim to Equip Visual Models with Reasoning Abilities

Andrew Dai, who is of Chinese descent, holds a bachelor's degree in computer science from Cambridge and a PhD in machine learning from Edinburgh. He interned at Google during his PhD and joined the company in 2012, staying for 14 years until starting his own business.


Image Source: Andrew Dai's LinkedIn

Shortly after joining Google, he co-authored the first paper on language-model pre-training and supervised fine-tuning, "Semi-supervised Sequence Learning," with Quoc V. Le; this paper laid the groundwork for the later emergence of GPT. Another foundational paper of his, "GLaM: Efficient Scaling of Language Models with Mixture-of-Experts," paved the way for the now-mainstream MoE architecture.

Image Source: Google

During his time at Google, he was deeply involved in almost all large model trainings, from Palm to Gemini 1.5 and Gemini 2.5. Under Jeff Dean's arrangement, he began leading the data division of Gemini (including synthetic data) in 2023, and the team later expanded to hundreds of people.

Image Source: Yinfei Yang's LinkedIn

Co-founding Elorian AI with Andrew Dai is Yinfei Yang, who worked at Google Research for four years, focusing on multimodal representation learning, before joining Apple to lead multimodal model R&D.

Image Source: arxiv

His representative research, "Scaling up visual and vision-language representation learning with noisy text supervision," advanced the development of multimodal representation learning.

Elorian AI's co-founders also include Seth Neel, who was an Assistant Professor at Harvard University and is an expert in data and AI.

Why discuss the groundbreaking papers written by Elorian AI's co-founders? Because their goal is not just engineering optimization but a paradigm shift at the foundational architecture level, upgrading AI from text-based intelligent understanding to vision-based intelligent understanding.

The current state of AI models is that, despite excelling in text-based tasks, even the most advanced frontier multimodal large models still stumble on the most basic visual grounding tasks.

For example, how do you fit a part precisely into a mechanical assembly so that it runs more accurately and efficiently? Such spatial, physical tasks are simple for elementary school students but challenging for existing multimodal large models.

This brings us back to biology for clues. In the human brain, vision is the underlying substrate supporting many thinking processes. Humans' ability to use visual and spatial reasoning is far more ancient than language-based logical reasoning.

For instance, teaching someone to navigate a maze using language can be confusing, but drawing a sketch makes it instantly understandable.

Even a bird, without language, can recognize and reason about geographical features through vision to achieve global long-distance migration. This is a strong signal that vision is likely the correct direction for truly advancing machine reasoning.

Now imagine encoding this biological visual instinct into AI's genes from the very start of model construction: building a native multimodal model that simultaneously understands and processes text, images, video, and audio, so that the model possesses genuine visual understanding. Andrew Dai and his team aim to build an innate "synesthete," teaching machines not only to "see" the world but also to "understand" it.

To Andrew Dai and his team, a deep understanding of the real "physical world" is the key to achieving the next leap in machine intelligence and ultimately reaching "Visual AGI."

VLMs with Post-Reasoning Are Not the Right Path to Visual Reasoning

There have been teams attempting this before. In fact, Andrew Dai's previous Gemini team was already among the global leaders in the multimodal field. However, traditional multimodal models are still primarily VLMs (vision-language models), built on a "two-step" logic: first converting visual input into language, then performing text-based reasoning (sometimes assisted by external tools).

However, post-reasoning inherently has limitations. On one hand, it is prone to model hallucinations; on the other, many visual tasks cannot be precisely described in words.
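To make the contrast concrete, here is a minimal, hypothetical sketch in Python. The captioner, text LLM, and `NativeVisualModel` below are illustrative stand-ins invented for this example (not ElorianAI's or any real system's API); they show how the two-step pipeline discards spatial structure, while a native model keeps and reasons over it.

```python
# Toy contrast between the "two-step" VLM pipeline and native visual reasoning.
# All names and interfaces here are hypothetical stand-ins for illustration.

def caption_model(image):
    # Stand-in captioner: collapses the scene into one sentence,
    # discarding exact positions and geometry.
    return "a red block is near a blue block"

def text_llm(prompt):
    # Stand-in text-only LLM: it can only reason over the words it receives.
    return f"text-only answer based on: {prompt}"

def two_step_vlm(image, question):
    """Traditional VLM pipeline: vision -> language -> text-based reasoning."""
    caption = caption_model(image)  # the lossy conversion step
    return text_llm(f"Scene: {caption}. Question: {question}")

class NativeVisualModel:
    """Sketch of a model that reasons over visual representations directly."""

    def encode(self, image):
        # Keep structured spatial information (here, toy 2D coordinates)
        # instead of flattening the scene into a sentence.
        return {"red_block": (0, 0), "blue_block": (1, 0)}

    def reason(self, visual_tokens, question):
        # A toy spatial check performed in "visual space" (coordinates),
        # which the lossy caption above could never support.
        red_x = visual_tokens["red_block"][0]
        blue_x = visual_tokens["blue_block"][0]
        return "yes, left of it" if red_x < blue_x else "no"

# The two-step pipeline can only echo whatever the caption preserved:
print(two_step_vlm("scene.png", "Is the red block left of the blue one?"))

# The native model answers from retained spatial structure:
model = NativeVisualModel()
print(model.reason(model.encode("scene.png"),
                   "Is the red block left of the blue one?"))
```

The point is not the toy logic but the data flow: in the first path every spatial fact must survive a round trip through prose, while in the second the model manipulates the visual representation itself, which is the shift the article describes.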

Additionally, visual generation models like NanoBanana excel in multimodal generation, but generation ability does not equal reasoning ability. The "thinking" before generation still relies on language models, not native reasoning capability.

To develop models that truly understand the spatial, structural, and relational complexities of the visual world, disruptive innovation at the underlying technology level is necessary.

So, how to innovate? Elorian AI's founders, with years of experience in the multimodal field, approach this by deeply integrating multimodal training with a new architecture specifically designed for multimodal reasoning. They abandon the traditional approach of treating images as static input, instead training models to directly interact with and manipulate visual representations to autonomously parse their structure, relationships, and physical constraints.

Of course, another core element is data, which is crucial to the performance and success of these models.

Andrew Dai stated that they place great importance on data quality, data mix ratios, data sources, and data diversity. They have innovated at the data layer, reconstructing the reasoning chain in visual space, and are extensively and deeply using synthetic data.

Combined, these efforts will give rise to new AI systems that move beyond simple visual "perception" to high-level visual "reasoning."

This AI system could be a visual reasoning foundation model: building a highly general but exceptionally proficient model in a specific capability set—visual reasoning.

As a general foundation model, its application areas should be broad.

First, in the robotics field, it could become the underlying neural center of powerful systems, granting them the ability to operate autonomously in a variety of unfamiliar environments.

For example, sending a robot to handle a sudden safety fault in a hazardous environment requires it to make quick, accurate decisions on the spot. Without a foundation model capable of deep reasoning, no one would dare let the robot press buttons or operate levers at random. But with strong reasoning capabilities, it might think: "Before operating this panel, maybe I should pull this lever first to activate the safety mechanism."

Furthermore, in disaster management, models with visual reasoning could analyze satellite images to monitor and prevent forest fires. In engineering, they could accurately understand complex visual blueprints and system diagrams. The significance of this ability lies in the fact that the operating principles of the physical world are fundamentally different from the pure code world. You can't design an airplane wing just by typing a few lines of pure code.

However, Elorian AI's models and capabilities are currently still on paper. They plan to release a model in 2026 that achieves SOTA level in visual reasoning. At that time, we can verify if their results match their claims.

When AI Truly Possesses "Visual Reasoning" Ability, How Will It Change the Physical World?

To enable AI to understand and influence the real physical world, technology has iterated several times.

From image recognition in the traditional CV era, to image generation models/multimodal models in generative AI, to world models, the understanding of the physical world has been continuously enhanced.

Visual reasoning foundation models could take it a step further. Because achieving visual reasoning allows AI to understand the physical world more deeply, thereby achieving a higher level of machine intelligence.

Imagine when models with deep understanding and fine-grained manipulation capabilities empower the embodied-intelligence and AI-hardware industries: their application scope would expand greatly. For example, robots could carry out more reliable industrial production or work in medical care, and AI hardware, especially wearable devices, could become smarter personal assistants.

However, underlying these technologies is still data. As Andrew Dai mentioned earlier, data quality, data mix ratios, data sources, and data diversity all determine model performance.

In the physical AI field, Chinese companies, at both the model level and the data level, are closer to world leadership than they are in text-based large models. If they can leverage their richer data and application scenarios to accelerate iteration, then whether in embodied intelligence or AI hardware, and whether applied in industry, healthcare, or the home, they have a strong opportunity to reach leading levels and potentially produce world-class enterprises.

Related Questions

Q: What is the main goal of current vision-language models (VLMs) according to the article, and what are their limitations?

A: The main goal of VLMs is to process visual input by first converting it into language and then performing text-based reasoning. Their limitation is that many visual tasks cannot be accurately described with text, leading to poor visual reasoning capabilities.

Q: Who are the founders of Elorian AI and what are their backgrounds?

A: The founders are Andrew Dai, a former Google DeepMind researcher with 14 years of experience, and Yinfei Yang, an AI expert who worked at Google Research and Apple. Andrew Dai contributed to foundational papers in language model pre-training and MoE architecture, while Yinfei Yang focused on multimodal representation learning.

Q: How does Elorian AI plan to improve AI's visual reasoning capabilities?

A: Elorian AI aims to develop a native multimodal model that processes text, images, video, and audio simultaneously. They focus on integrating multimodal training with new architectures designed for visual reasoning, directly interacting with visual representations to parse structures and physical constraints, and using high-quality, diverse synthetic data.

Q: What potential applications are mentioned for AI with advanced visual reasoning skills?

A: Applications include robotics for autonomous operations in unfamiliar environments, disaster management through satellite image analysis, engineering by interpreting complex visual diagrams, and enhancing AI hardware like wearable devices for personal assistance.

Q: When does Elorian AI plan to release their model, and what is the expected achievement?

A: Elorian AI plans to release a model in 2026 that achieves state-of-the-art (SOTA) performance in visual reasoning, aiming to elevate capabilities from 'child-level' to 'adult-level'.
