# Related Articles on Open Source

The HTX news center offers the latest articles and in-depth analysis on "Open Source", covering market trends, project news, technology developments, and regulatory policy in the crypto industry.

## Why is China's AI Developing So Fast? The Answer Lies Inside the Labs

A US researcher's visit to China's top AI labs reveals distinct cultural and organizational factors driving China's rapid AI development. While talent, data, and compute are similar to the West, Chinese labs excel through a pragmatic, execution-focused culture: less emphasis on individual stardom and conceptual debate, and more on teamwork, engineering optimization, and mastering the full tech stack. A key advantage is the integration of young students and researchers who approach model-building with fresh perspectives and low ego, prioritizing collective progress over personal credit. This contrasts with the US culture of self-promotion and "star scientist" narratives. Chinese labs also exhibit a strong "build, don't buy" mentality, preferring to develop core capabilities—like data pipelines and environments—in-house rather than relying on external services. The ecosystem feels more collaborative than tribal, with mutual respect among labs. While government support exists, its scale is unclear, and technical decisions appear driven by labs, not state mandates. Chinese companies across sectors, from platforms to consumer tech, are building their own foundational models to control their tech destiny, reflecting a broader cultural drive for technological sovereignty. Demand for AI is emerging, with spending patterns potentially mirroring cloud infrastructure more than traditional SaaS. Despite challenges like a less mature data industry and GPU shortages, Chinese labs are propelled by vast talent, rapid iteration, and deep integration with the open-source community. The competition is evolving beyond a pure model race into a contest of organizational execution, developer ecosystems, and industrial pragmatism.

marsbit · 05/10 08:09

## Computing Power Constrained, Why Did DeepSeek-V4 Open Source?

DeepSeek-V4 has been released as a preview open-source model, featuring 1 million tokens of context length as a baseline capability—previously a premium feature locked behind enterprise paywalls by major overseas AI firms. The official announcement, however, openly acknowledges computational constraints, particularly limited service throughput for the high-end DeepSeek-V4-Pro version due to restricted high-end computing power. Rather than competing on pure scale, DeepSeek adopts a pragmatic approach that balances algorithmic innovation with hardware realities in China’s AI ecosystem. The V4-Pro model uses a highly sparse architecture with 1.6T total parameters but only activates 49B during inference. It performs strongly in agentic coding, knowledge-intensive tasks, and STEM reasoning, competing closely with top-tier closed models like Gemini Pro 3.1 and Claude Opus 4.6 in certain scenarios. A key strategic product is the Flash edition, with 284B total parameters but only 13B activated—making it cost-effective and accessible for mid- and low-tier hardware, including domestic AI chips from Huawei (Ascend), Cambricon, and Hygon. This design supports broader adoption across developers and SMEs while stimulating China's domestic semiconductor ecosystem. Despite facing talent outflow and intense competition in user traffic—with rivals like Doubao and Qianwen leading in monthly active users—DeepSeek has maintained technical momentum. The release also comes amid reports of a new funding round targeting a valuation exceeding $10 billion, potentially setting a new record in China’s LLM sector. Ultimately, DeepSeek-V4 represents a shift toward open yet realistic infrastructure development in the constrained compute landscape of Chinese AI, emphasizing engineering efficiency and domestic hardware compatibility over pure model scale.
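The sparsity figures quoted above can be sanity-checked with simple arithmetic. The parameter counts come from the summary; the helper function below is only an illustrative sketch, not anything from DeepSeek's codebase.

```python
def active_fraction(total_params_b: float, active_params_b: float) -> float:
    """Fraction of parameters activated per token in a sparse MoE model."""
    return active_params_b / total_params_b

# Figures from the article summary, in billions of parameters.
v4_pro = active_fraction(1600, 49)    # V4-Pro: 1.6T total, 49B active
v4_flash = active_fraction(284, 13)   # Flash: 284B total, 13B active

print(f"V4-Pro activates {v4_pro:.1%} of its parameters per token")
print(f"Flash activates {v4_flash:.1%} of its parameters per token")
```

Both models activate only a few percent of their weights per token, which is what makes the Flash edition viable on mid- and low-tier hardware despite its 284B total size.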

marsbit · 04/26 00:27

## DeepSeek No Longer Wants to Focus Only on Large Models

DeepSeek, a leading Chinese AI company, has released its new model series DeepSeek-V4, featuring two versions: the high-performance V4-Pro with 1.6 trillion parameters and the cost-efficient V4-Flash. Both support 1 million token context windows and use Mixture-of-Experts (MoE) architecture to improve efficiency. The company continues its strategy of offering competitive pricing, with input tokens priced as low as ¥0.2 per million tokens. A key revelation is DeepSeek’s explicit link between future price reductions and the mass availability of Huawei’s Ascend 950 AI chips in the second half of the year. This signals a strategic shift from relying solely on algorithmic and engineering optimizations to integrating domestic computing power into its core cost structure. DeepSeek has adapted its inference system to run efficiently on both NVIDIA GPUs and Huawei NPUs, potentially challenging NVIDIA's CUDA ecosystem dominance. Concurrently, DeepSeek is reportedly seeking significant external investment, with a pre-money valuation of around ¥300 billion. This move highlights growing pressures in scaling compute infrastructure, retaining top talent—amid recent departures of key researchers—and accelerating commercialization efforts. The company has also updated its consumer app with tiered model access, indicating a stronger product focus. The V4 release underscores that China's AI competition is evolving beyond pure model capability into a broader contest involving compute supply chains, engineering systems, financing, and talent strategy.
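At the quoted input price of ¥0.2 per million tokens, per-request costs are easy to estimate. The price is taken from the summary; the function below is a minimal sketch with an illustrative name, not an official API.

```python
def input_cost_yuan(num_tokens: int, price_per_million: float = 0.2) -> float:
    """Estimated input-token cost in yuan at a flat per-million-token rate."""
    return num_tokens / 1_000_000 * price_per_million

# Filling the full 1M-token context window with input tokens:
print(input_cost_yuan(1_000_000))  # 0.2 yuan
```

Even a maximally long prompt costs a fraction of a yuan at this rate, which illustrates why the article frames pricing, not capability alone, as a competitive lever.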

marsbit · 04/25 01:45
