# Model Related Articles

HTX News Center provides the latest articles and in-depth analysis on "Model", covering market trends, project updates, tech developments, and regulatory policies in the crypto industry.

Tsinghua's Prediction 2 Years Ago Is Becoming Global Consensus: Meta and Two Other Major AI Institutions Have Reached the Same Conclusion

Summary: In a remarkable validation of Chinese AI research, Meta and METR have independently reached conclusions that align with the "Density Law" proposed two years ago by a joint team from Tsinghua University and FaceWall Intelligent. Published in Nature Machine Intelligence in late 2025, the law states that the computational power required to achieve a specific level of AI performance halves every 3.5 months. This convergence was starkly evident in April 2026: METR reported that AI capabilities are doubling every 88.6 days, while Meta's new model, Muse Spark, demonstrated it could match the performance of a model from the previous year using less than one-tenth of the training compute. When plotted, the growth curves from all three sources—using different metrics (parameters, compute, task length)—show an almost identical exponential slope. The findings have profound implications: AI inference costs are collapsing faster than anticipated, powerful edge-computing AI is rapidly becoming feasible, and the industry's strategy of simply scaling model size is becoming economically inefficient. The Chinese team, which has been building its "MiniCPM" model series based on this law since 2024, is seen as having a significant two-year lead in practical engineering experience, marking a rare instance where Chinese researchers pioneered a fundamental predictive trend in AI.
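
The halving law summarized above is easy to sanity-check with a few lines of arithmetic. A minimal sketch (the function name and values are illustrative, not from the article): with a 3.5-month halving period, the compute needed for a fixed capability level after twelve months is 0.5^(12/3.5) ≈ 0.093 of today's, consistent with the sub-one-tenth figure the summary attributes to Muse Spark.

```python
# Sketch of the compute trajectory implied by the "Density Law" as summarized
# above: compute needed for a fixed capability level halves every 3.5 months.
# Function name and inputs are illustrative assumptions, not from the article.

def relative_compute(months: float, halving_period: float = 3.5) -> float:
    """Fraction of today's compute needed after `months` to reach
    the same capability level, under a fixed halving period."""
    return 0.5 ** (months / halving_period)

# After one year the law predicts roughly a 10x compute reduction.
print(f"{relative_compute(12):.3f}")  # ~0.093
```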

marsbit · 04/13 12:14

Claiming the "Happy Horse": Alibaba's AI Lays Out the "Eight Trigrams Formation"

Alibaba has officially unveiled the "HappyHorse" (HappyHorse-1.0) AI video generation model, which recently topped the global benchmark on Artificial Analysis with an Elo score of 1357. Developed by Alibaba’s ATH (Alibaba Token Hub) innovation unit, the model is notable for its ability to generate high-definition video with synchronized audio and sound effects from text input, significantly improving motion coherence and reducing production time and cost. This launch is part of a broader acceleration in Alibaba’s AI strategy. In late March and early April, the company released three flagship models in quick succession: Qwen3.5-Omni, Wan2.7-Image, and Qwen3.6-Plus. The latter broke global daily call volume records with 1.4 trillion tokens processed shortly after release. Alibaba has also undergone significant organizational restructuring to support its AI ambitions. In March, it established the ATH business group, led by CEO Wu Yongming, to integrate AI development, cloud services, and application deployment. Further changes in April included forming a group-level technology committee and consolidating the Tongyi Lab into a dedicated AI model division. The company is investing heavily in AI, with plans to spend over 380 billion RMB on cloud and AI infrastructure over three years. Its self-developed GPUs have already entered mass production. While the market has responded positively to these moves, challenges remain in balancing centralized control with operational flexibility and maintaining team stability amid rapid changes.
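
Elo scores like the 1357 reported for HappyHorse-1.0 translate into head-to-head preference probabilities via the standard Elo expectation formula. A small sketch (the 1300-point rival rating is a made-up example, not a figure from the article):

```python
# Standard Elo expected-score formula: the probability that model A's output
# is preferred over model B's, given their ratings. The rival rating of 1300
# below is a hypothetical example for illustration only.

def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A is preferred over B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 1357-rated model vs. a hypothetical 1300-rated rival:
print(round(elo_expected_score(1357, 1300), 3))  # ~0.581
```

A 57-point Elo gap therefore corresponds to winning roughly 58% of pairwise comparisons, which is why even modest leaderboard gaps are treated as meaningful.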

marsbit · 04/11 04:07

Meeting at the Pinnacle of Generalists: 3 Billion in 30 Days, What Did Qianxun AI Do Right?

Qianxun Intelligence, a Chinese embodied AI and robotics startup, completed two major funding rounds totaling 3 billion RMB within 30 days in early 2026, backed by prominent investors including Shunwei Capital (Lei Jun) and Yunfeng Capital (Jack Ma). Founded in January 2024 by a team with expertise in robotics, AI, and commercialization, the company focuses on developing general-purpose embodied AI models. Its open-source model, Spirit v1.5, surpassed competitors in performance benchmarks, demonstrating strong zero-shot generalization capabilities for complex tasks. The company follows a scaling law approach similar to large language models (LLMs), leveraging massive diverse datasets—including internet videos, wearable device data, and teleoperation data—to train its Vision-Language-Action (VLA) model. Qianxun employs a multi-source data engine, collecting over 200,000 hours of real-world interaction data, with plans to reach 1 million hours by 2026. It uses low-cost wearable devices for efficient data acquisition and emphasizes real-world deployment for continuous data feedback. The company has deployed robots like "Xiao Mo" in industrial settings (e.g., battery production lines for CATL) and commercial scenarios (e.g., as baristas in JD.com malls), using operational data to refine its models. This "commercialize while iterating" strategy supports both revenue generation and model improvement, positioning Qianxun to compete globally in embodied AI.

marsbit · 04/07 04:05

Just 6 Days After Launching ChatGPT Health, OpenAI Is Surpassed on Its Own Medical Benchmark

In a significant development in the AI healthcare sector, Baichuan Intelligence has surpassed OpenAI's GPT-5.2 High on the HealthBench benchmark—a medical evaluation dataset created by OpenAI with input from 260+ doctors across 60 countries—just six days after OpenAI launched ChatGPT Health. Baichuan's new model, Baichuan-M3, achieved a top score of 65.1 and also led in the more challenging HealthBench Hard subset, while demonstrating the lowest hallucination rate (3.5%) without relying on external tools. Key to M3’s performance is its Fact Aware RL technique, which improves diagnostic accuracy by balancing factual precision with proactive questioning. The model avoids both over-confident errors and overly vague responses. Additionally, Baichuan introduced SCAN-bench, a new evaluation framework designed to simulate real doctor-patient interactions. In tests, M3 outperformed human specialists in areas like safety stratification, clarity, and diagnostic questioning, partly due to its ability to integrate knowledge across medical disciplines. Baichuan is now rolling out the model via its consumer product Baixiaoying (百小应), offering tailored interfaces for both doctors and patients. The company emphasizes a focus on "serious medicine," prioritizing complex areas like oncology over general wellness, aiming to augment—not just assist—medical professionals. According to CEO Wang Xiaochuan, enhancing AI’s capability in high-stakes medical scenarios is crucial for building user trust and advancing toward AGI through deeper biological understanding.
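
The article names "Fact Aware RL" but does not specify its reward design, so the following is only a plausible sketch of the trade-off it describes: a reward that blends factual precision with proactive questioning and penalizes unsupported claims. Every function name, weight, and value here is invented for illustration.

```python
# Hypothetical sketch only: one way a reward could balance factual precision
# against proactive clarifying questions, as the summary describes.
# All names, weights, and scores are assumptions, not Baichuan's actual method.

def combined_reward(factual_score: float, question_score: float,
                    hallucination_penalty: float, alpha: float = 0.7) -> float:
    """Weighted blend of a factuality term and a questioning term,
    minus a penalty for unsupported (hallucinated) claims."""
    return alpha * factual_score + (1 - alpha) * question_score - hallucination_penalty

# An answer that is accurate (0.9), asks useful follow-ups (0.6),
# and incurs a small hallucination penalty (0.1):
print(round(combined_reward(0.9, 0.6, 0.1), 2))  # 0.71
```

The point of such a shape is that maximizing the factuality term alone encourages vague, hedged answers, while the questioning term rewards actively narrowing the diagnosis, which matches the over-confident-vs-overly-vague failure modes the summary says M3 avoids.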

marsbit · 01/14 02:31
