ElorianAI Raises $55 Million to Pursue Physical-World AGI Through "Visual Reasoning"
ElorianAI, co-founded by former Google AI researcher Andrew Dai and AI specialist Yinfei Yang, has raised $55 million in early funding to develop next-generation AI systems with advanced visual reasoning capabilities. While today's large models excel at text-based tasks such as programming and mathematics, they perform poorly at visual reasoning: even top models like Gemini score only at the level of a three-year-old on basic visual benchmarks.
The key limitation lies in the architecture of current vision-language models (VLMs), which first convert visual inputs into text before reasoning, losing critical spatial and structural information. ElorianAI aims to build a native multimodal model that processes and reasons directly in visual space, enabling deeper understanding of physical relationships, constraints, and environments.
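The information loss described above can be illustrated with a toy sketch. The code below is purely hypothetical (it does not model any real VLM and the scene, names, and threshold are invented for illustration): a caption-style text summary discards object coordinates, so a spatial question that depends on geometry can no longer be answered from the text alone, whereas a representation that keeps the geometry can answer it directly.

```python
# Toy illustration only: why collapsing a visual scene into text can lose
# the spatial detail needed for physical reasoning. All names, values, and
# the scene itself are hypothetical, not taken from any real model.

from dataclasses import dataclass


@dataclass
class SceneObject:
    name: str
    x: float  # horizontal position in metres
    y: float  # height in metres


scene = [
    SceneObject("mug", 0.10, 0.82),
    SceneObject("shelf_edge", 0.12, 0.80),
]


def caption(objects: list[SceneObject]) -> str:
    """Text-first pipeline: summarise the scene, dropping coordinates."""
    return "A mug near a shelf edge."


def mug_overhangs(objects: list[SceneObject]) -> bool:
    """Spatial check over the raw geometry that a caption cannot support."""
    mug = next(o for o in objects if o.name == "mug")
    edge = next(o for o in objects if o.name == "shelf_edge")
    # The mug's centre lies inside the shelf edge, so it does not overhang.
    return mug.x > edge.x


text_view = caption(scene)
print("overhang" in text_view)   # the caption holds no positional data
print(mug_overhangs(scene))      # the geometry answers the question directly
```

The point is not the specific check but the asymmetry: once the scene is reduced to a sentence, the positional constraint is unrecoverable, which is the loss the article attributes to text-first VLM pipelines.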
The company plans to release a state-of-the-art visual reasoning model by 2026, with potential applications in robotics, disaster management, engineering, healthcare, and AI hardware. By using high-quality, diverse, and synthetically generated data, ElorianAI intends to create models that don’t just perceive but truly understand and reason about the physical world—bringing us closer to visual AGI.