# Related Articles on Open Source

The HTX News Center offers the latest articles and in-depth analysis on "Open Source", covering market trends, project news, technological developments, and regulatory policy in the crypto industry.

Jensen Huang Announces 8 New Products in 1.5 Hours, NVIDIA Fully Bets on AI Inference and Physical AI

NVIDIA CEO Jensen Huang unveiled eight major announcements during his CES 2026 keynote, focusing on advancing AI inference and physical AI technologies. The centerpiece was the NVIDIA Vera Rubin POD AI supercomputer, which integrates six custom chips—Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-X CPO—designed to work in concert. The Rubin GPU offers 5x higher inference and 3.5x higher training performance than Blackwell, with support for HBM4 memory. The Vera Rubin NVL72 system delivers 3.6 EFLOPS of NVFP4 inference performance in a single rack, with enhanced memory bandwidth. NVIDIA also introduced the Spectrum-X Ethernet CPO for improved power efficiency, an inference context memory storage platform to optimize KV cache storage and reduce recomputation, and the DGX SuperPOD based on the Rubin architecture, cutting token costs for large MoE models to 1/10. On the software side, NVIDIA expanded its open-source offerings, including new models and datasets, and emphasized the rise of physical AI. The company open-sourced the Alpha-Mayo model for autonomous driving, enabling reasoning-based decision-making, and announced production-ready NVIDIA DRIVE platforms for Mercedes-Benz. Partnerships with Siemens and robotics firms like Boston Dynamics were highlighted, underscoring NVIDIA's full-stack approach to AI infrastructure and real-world AI applications.

marsbit · 01/06 04:36
Racing to Be the First Stock: The Substance, Capabilities, and Ambition of China's Largest Independent Model Company

Zhipu AI, China's largest independent large language model (LLM) company by revenue, has passed its listing hearing on the Hong Kong Stock Exchange with a valuation of RMB 24.377 billion. Its IPO filing provides the first clear look at the financials of a major Chinese LLM player. From 2022 to 2024, Zhipu's revenue grew at a 130% CAGR, reaching RMB 310 million in 2024. Nearly 85% of its revenue comes from on-premise model deployments for enterprise clients, with the remainder from its MaaS (Model-as-a-Service) platform. Despite rapid revenue growth, the company reported significant adjusted net losses, driven overwhelmingly by R&D expenses which reached RMB 1.59 billion in H1 2025. A major portion of these costs is attributed to computing power, essential for training its flagship models. A key part of Zhipu's strategy is a "land and expand" approach: using strategic price cuts on its MaaS platform to attract a large user base (over 1.2 million enterprise developers) and then converting them into high-value on-premise clients. The release of its powerful open-source base model, GLM-4.5/4.6, which ranks among the top global models in several benchmarks, led to an exponential increase in API calls and token consumption. The company is betting that continued heavy R&D investment is necessary to stay at the forefront of the intensely competitive global AI market. Its leadership believes that possessing a superior base model is the ultimate product and the key to long-term growth, even if it requires substantial short-term losses. As one of the first Chinese LLM firms to file for an IPO, Zhipu's market debut is poised to be a major test for valuing China's independent AI industry.

marsbit · 12/23 11:13
Lighthouses Guide the Way, Torches Claim Sovereignty: A Hidden War Over AI Allocation Rights

The article "Lighthouses Guide the Way, Torches Claim Sovereignty: A Hidden War Over AI Allocation Rights" by Zhixiong Pan examines the underlying power struggle in AI development, moving beyond superficial metrics like model size and performance rankings. It identifies two coexisting paradigms: the "Lighthouse," representing state-of-the-art (SOTA), centralized AI systems controlled by tech giants like OpenAI and Google, which push cognitive boundaries but are resource-intensive and create dependency risks; and the "Torch," symbolizing open-source, locally deployable models (e.g., DeepSeek, Mistral) that democratize access, ensure data sovereignty, and enable private, customizable AI assets. The Lighthouse drives innovation and sets technical directions but poses risks around accessibility, control, and single points of failure. The Torch, while shifting security and responsibility to users, offers resilience, cost stability, and compliance for critical applications in sectors like healthcare and finance. The interplay between these models forms a symbiotic relationship: Lighthouses expand capabilities, while Torches disseminate and stabilize these advances, collectively elevating AI's baseline. Ultimately, the conflict is over AI allocation rights—defining default intelligence, managing externalities, and determining individual control. A dual strategy—using Lighthouses for frontier tasks and Torches for private, reliable deployment—is proposed as the pragmatic path forward, balancing extreme capability with broad, sovereign access. The true measure of the AI era lies not in raw power but in whether individuals possess "a light they don't have to borrow from anyone."

marsbit · 12/22 11:13