Exploring Physical World AGI with "Visual Reasoning", ElorianAI Raises $55 Million

marsbit · Published 2026-04-23 · Last updated 2026-04-23

Summary

ElorianAI, co-founded by ex-Google AI expert Andrew Dai and former AI specialist Yinfei Yang, has raised $55 million in early funding to develop next-generation AI systems with advanced visual reasoning capabilities. While current large models excel in text-based tasks like programming and math, they perform poorly in visual reasoning—even top models like Gemini only match a 3-year-old’s ability in basic visual benchmarks. The key limitation lies in the architecture of current vision-language models (VLMs), which first convert visual inputs into text before reasoning, losing critical spatial and structural information. ElorianAI aims to build a native multimodal model that processes and reasons directly in visual space, enabling deeper understanding of physical relationships, constraints, and environments. The company plans to release a state-of-the-art visual reasoning model by 2026, with potential applications in robotics, disaster management, engineering, healthcare, and AI hardware. By using high-quality, diverse, and synthetically generated data, ElorianAI intends to create models that don’t just perceive but truly understand and reason about the physical world—bringing us closer to visual AGI.

By Alpha Community

AI large models have surpassed average humans in certain areas, such as programming and mathematics. Reports indicate that nearly all of Anthropic's internal programming is now done by AI, and Google's Gemini Deep Think solved 5 of the 6 problems at IMO 2025, reaching gold medal level.

However, in visual reasoning, even the leading Gemini 3 Pro only reached the level of a 3-year-old child on BabyVision, a benchmark testing basic visual reasoning abilities.

Why are large models strong in programming and mathematics but weak in visual reasoning? The limitation lies in their "thinking process." Vision-language models (VLMs) must first convert visual input into language and then perform text-based reasoning. However, many visual tasks cannot be accurately described in words, which leaves the models with poor visual reasoning capabilities.

Andrew Dai, who worked at Google DeepMind for 14 years, teamed up with Apple's seasoned AI expert Yinfei Yang to establish a company called Elorian AI. Their goal is to elevate the model's visual reasoning ability from "child level" to "adult level," enabling the model to natively "think" within the "visual space" and thereby advance toward AGI in the physical world.

Elorian AI raised $55 million in early-stage funding co-led by Striker Venture Partners, Menlo Ventures, and Altimeter, with participation from 49 Palms and top AI scientists including Jeff Dean.

Pioneers in Multimodal Models Aim to Equip Visual Models with Reasoning Abilities

Andrew Dai, who is of Chinese descent, holds a bachelor's degree in computer science from Cambridge and a PhD in machine learning from Edinburgh. He interned at Google during his PhD and joined the company in 2012, staying for 14 years until starting his own business.


Image Source: Andrew Dai's LinkedIn

Shortly after joining Google, he co-authored the first paper on language model pre-training and supervised fine-tuning, "Semi-supervised Sequence Learning," with Quoc V. Le. This paper laid the foundation for the birth of GPT. Another foundational paper of his is "GLaM: Efficient Scaling of Language Models with Mixture-of-Experts," which paved the way for the now mainstream MoE architecture.

Image Source: Google

During his time at Google, he was deeply involved in the training of nearly all of its large models, from PaLM to Gemini 1.5 and Gemini 2.5. Under Jeff Dean's arrangement, he began leading Gemini's data division (including synthetic data) in 2023, and the team later expanded to hundreds of people.

Image Source: Yinfei Yang's LinkedIn

Co-founding Elorian AI with Andrew Dai is Yinfei Yang, who worked at Google Research for four years, focusing on multimodal representation learning, before joining Apple to lead multimodal model R&D.

Image Source: arxiv

His representative research, "Scaling up visual and vision-language representation learning with noisy text supervision," advanced the development of multimodal representation learning.

Elorian AI's co-founders also include Seth Neel, who was an Assistant Professor at Harvard University and is an expert in data and AI.

Why discuss the groundbreaking papers written by Elorian AI's co-founders? Because their goal is not just engineering optimization but a paradigm shift at the foundational architecture level, upgrading AI from text-based intelligent understanding to vision-based intelligent understanding.

The current state of AI models is that, despite excelling in text-based tasks, even the most advanced frontier multimodal large models still stumble on the most basic visual grounding tasks.

For example, how do you fit a part precisely into a mechanical device so that it runs more accurately and efficiently? Such spatial-physical tasks are simple for an elementary school student but challenging for existing multimodal large models.

This brings us back to biology for clues. In the human brain, vision is the underlying substrate supporting many thinking processes. Humans' ability to use visual and spatial reasoning is far more ancient than language-based logical reasoning.

For instance, teaching someone to navigate a maze using language can be confusing, but drawing a sketch makes it instantly understandable.
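The maze example can be made concrete with a toy sketch (our illustration, not ElorianAI's method): a text caption throws away the maze's spatial layout, while reasoning directly on the grid representation preserves it, so an exact answer stays computable.

```python
from collections import deque

# A maze as a 2D grid: 0 = open cell, 1 = wall.
# A caption like "a 3x3 maze with three walls" loses the layout;
# the grid itself still supports exact spatial reasoning.
maze = [
    [0, 0, 1],
    [1, 0, 1],
    [0, 0, 0],
]

def shortest_path_len(grid, start=(0, 0), goal=None):
    """BFS over the grid itself -- 'reasoning' in the spatial domain."""
    rows, cols = len(grid), len(grid[0])
    goal = goal or (rows - 1, cols - 1)
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # no path exists

print(shortest_path_len(maze))  # shortest route, recoverable only from the grid
```

No amount of reasoning over the lossy caption alone can recover the shortest route; the structured spatial representation makes it trivial. That asymmetry is the article's point about reasoning natively in visual space.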

Even a bird, without language, can recognize and reason about geographical features through vision to achieve global long-distance migration. This is a strong signal that vision is likely the correct direction for truly advancing machine reasoning.

So imagine encoding this biological visual instinct into AI's genes from the very beginning of model construction: a native multimodal model that simultaneously understands and processes text, images, video, and audio, and therefore possesses genuine visual understanding. Andrew Dai and his team aim to build an innate "synesthete," teaching machines not only to "see" the world but also to "understand" it.

To Andrew Dai and his team, a deep understanding of the real "physical world" is the key to achieving the next leap in machine intelligence and ultimately reaching "Visual AGI."

VLMs with Post-Reasoning Are Not the Right Path to Visual Reasoning

There have been teams attempting this before. In fact, Andrew Dai's previous Gemini team was already among the global leaders in the multimodal field. However, traditional multimodal models are still primarily VLMs (vision-language models), built on a "two-step" logic: first converting visual input into language, then performing text-based reasoning (sometimes assisted by external tools).

However, this post-reasoning approach has inherent limitations. On one hand, it is prone to hallucination; on the other, many visual tasks cannot be precisely described in words.

Additionally, visual generation models like NanoBanana excel in multimodal generation, but generation ability does not equal reasoning ability. The "thinking" before generation still relies on language models, not native reasoning capability.

To develop models that truly understand the spatial, structural, and relational complexities of the visual world, disruptive innovation at the underlying technology level is necessary.

So, how to innovate? Elorian AI's founders, with years of experience in the multimodal field, approach this by deeply integrating multimodal training with a new architecture specifically designed for multimodal reasoning. They abandon the traditional approach of treating images as static input, instead training models to directly interact with and manipulate visual representations to autonomously parse their structure, relationships, and physical constraints.

Of course, another core element is data, which is crucial to the performance and success of these models.

Andrew Dai stated that they place great importance on data quality, data mix ratios, data sources, and data diversity. They have innovated at the data layer, reconstructing the reasoning chain in visual space, and are extensively and deeply using synthetic data.

Combined, these efforts will give rise to new AI systems that move beyond simple visual "perception" to high-level visual "reasoning."

This AI system could be a visual reasoning foundation model: a highly general model that is nonetheless exceptionally proficient in one specific capability set, visual reasoning.

As a general foundation model, its application areas should be broad.

First, in the robotics field, it could become the underlying neural center of powerful systems, giving them the ability to operate autonomously in various unfamiliar environments.

For example, sending a robot to handle a sudden safety fault in a hazardous environment requires it to make quick, accurate decisions on the spot. If the robot lacks a foundation model with deep reasoning capabilities, people wouldn't dare let it randomly press buttons or operate levers. But if it has strong reasoning capabilities, it might think: "Before operating this panel, maybe I should pull this lever first to activate the safety mechanism."

Furthermore, in disaster management, models with visual reasoning could analyze satellite images to monitor and prevent forest fires. In engineering, they could accurately understand complex visual blueprints and system diagrams. The significance of this ability lies in the fact that the operating principles of the physical world are fundamentally different from the pure code world. You can't design an airplane wing just by typing a few lines of pure code.

However, Elorian AI's models and capabilities are currently still on paper. They plan to release a model in 2026 that achieves SOTA level in visual reasoning. At that time, we can verify if their results match their claims.

When AI Truly Possesses "Visual Reasoning" Ability, How Will It Change the Physical World?

To enable AI to understand and influence the real physical world, technology has iterated several times.

From image recognition in the traditional CV era, to image generation models/multimodal models in generative AI, to world models, the understanding of the physical world has been continuously enhanced.

Visual reasoning foundation models could take this a step further: visual reasoning lets AI understand the physical world more deeply, and thereby reach a higher level of machine intelligence.

Imagine models capable of deep understanding and fine-grained operation empowering the embodied intelligence and AI hardware industries; their application scope would expand greatly. Robots could perform more reliable industrial production or work in medical care, and AI hardware, especially wearable devices, could become smarter personal assistants.

However, underlying these technologies is still data. As Andrew Dai mentioned earlier, data quality, data mix ratios, data sources, and data diversity all determine model performance.

In the physical AI field, Chinese companies are closer to the global frontier, at both the model level and the data level, than they are in text-based large models. If they can leverage their advantages in richer data and application scenarios to accelerate iteration, then whether in embodied intelligence or AI hardware, and whether applied in industry, healthcare, or the home, they have a greater opportunity to reach leading levels and potentially produce world-class enterprises.

Related Questions

Q: What is the main goal of current vision-language models (VLMs) according to the article, and what are their limitations?

A: VLMs process visual input by first converting it into language and then performing text-based reasoning. Their limitation is that many visual tasks cannot be accurately described with text, leading to poor visual reasoning capabilities.

Q: Who are the founders of Elorian AI and what are their backgrounds?

A: The founders are Andrew Dai, a former Google DeepMind researcher with 14 years of experience, and Yinfei Yang, an AI expert who worked at Google Research and Apple. Andrew Dai contributed to foundational papers in language model pre-training and the MoE architecture, while Yinfei Yang focused on multimodal representation learning.

Q: How does Elorian AI plan to improve AI's visual reasoning capabilities?

A: Elorian AI aims to develop a native multimodal model that processes text, images, video, and audio simultaneously. They focus on integrating multimodal training with new architectures designed for visual reasoning, training models to interact directly with visual representations to parse structures and physical constraints, and using high-quality, diverse synthetic data.

Q: What potential applications are mentioned for AI with advanced visual reasoning skills?

A: Applications include robotics for autonomous operation in unfamiliar environments, disaster management through satellite image analysis, engineering via interpretation of complex visual diagrams, and AI hardware such as wearable devices serving as smarter personal assistants.

Q: When does Elorian AI plan to release their model, and what is the expected achievement?

A: Elorian AI plans to release a model in 2026 that achieves state-of-the-art (SOTA) performance in visual reasoning, aiming to elevate capabilities from "child level" to "adult level."

