Reading the Capital Market After DeepSeek V4's Launch: Zhipu and MiniMax Plunge, NVIDIA Panics

marsbit · Published 2026-04-24 · Updated 2026-04-24

Article Summary

DeepSeek V4, a 1T parameter MoE model with a 285B Flash version, has been fully open-sourced under Apache 2.0, triggering significant reactions across capital markets. Chinese AI chipmakers like Cambricon and Hygon saw major stock gains, with Cambricon rising 60% monthly. In contrast, Hong Kong-listed AI firms Zhipu and MiniMax dropped over 7%, facing heavy short-selling. NVIDIA’s shares dipped, with analysts noting a "decoupling" of Chinese and North American AI inference demand. The launch intensified competition in the AI model space, following 11 major releases in 30 days, including GPT-5.5 and Llama 4. Unlike others, V4’s permissive licensing and full open-source release challenged closed-source models on performance, cost, and accessibility. Critically, V4 announced Day-0 support for domestic chips like Huawei’s Ascend 950PR and Cambricon’s Siyuan 590, offering better cost-performance than NVIDIA counterparts. This shift reduces reliance on CUDA, aligning with NVIDIA CEO’s earlier concerns about Chinese AI chips threatening its dominance. The move signals a tangible step in China’s AI supply chain independence, redirecting compute demand to local manufacturers like Hua Hong Semiconductor.

DeepSeek V4 is finally live. This is a moment that has been awaited for nearly five months. The main model with 1T MoE parameters + the 285B parameter Flash version, followed by the full 1.6T Pro version, all open-sourced on GitHub under the Apache 2.0 license, with weights and deployment code released simultaneously.

As soon as the model was released, the capital market responded in three distinct yet interconnected ways.

Different Reactions in the Capital Market

On the A-share computing power chain side, there was an almost across-the-board surge. Cambricon logged 11 consecutive days of gains, rising 3.7% in a single day for a cumulative increase of over 60% within the month. Hygon Information hit its 10% daily limit during trading and closed up 8.4%. SMIC's A-shares rose 4.91%, while its Hong Kong shares climbed 8.81%. Huahong's Hong Kong shares surged as much as 18% before closing up 12%. The Guotai Science and Technology Innovation Board Chip ETF attracted 2.4 billion yuan in a single day, taking its assets under management to a record high.

On the Hong Kong stock market, the large model companies told a different story. Zhipu (02513.HK) fell 8.07%, with a short-selling ratio of 9.9%. MiniMax (00100.HK) dropped 7.40%, with its short-selling ratio soaring to 22.87%, the highest single-day short-selling figure for Hong Kong's AI sector in the past three months. Both companies were standard-bearers of the Hong Kong AI listing wave of the second half of 2025, and their IPO prospectuses highlighted the same core competency: "self-developed foundational large models."

The reaction on the other side of the Pacific was equally specific. NVIDIA opened down 1.8% last night, falling as much as 2.6% during the session, and closed flat for the day. Bloomberg's market commentary compared this consolidation to the V3 "DeepSeek moment" on January 27. The difference is that the January episode was a panic sell-off, wiping out $600 billion in market value in a single day. This time, it was more like a repricing—milder in scale but clear in direction. A new phrase appeared in buy-side research notes: "China's AI inference demand is beginning to decouple from North America's AI inference demand."

Layer these three market reactions together and you get the first verdict written by the market within 24 hours of V4's launch: with open source prevailing, money began to reposition. What is being priced is no longer the model itself, but which card the model runs on and which supply chain it is embedded in.

11 New Models in 30 Days: V4 Adds Fuel to the Open-Source Camp

The timing of V4's release is part of the reason why the reaction was amplified.

Zooming out to the past 30 days: between March 26 and April 24, at least 11 significantly influential large models were released or received major updates, covering almost all major players. The list includes Anthropic Opus 4.6, Google Gemini 3.1 Pro, OpenAI GPT-5.5, Mistral Large 3, Meta Llama 4, Moonshot's Kimi K2.6, Alibaba Qwen3-Next, ByteDance Doubao 2.5 Pro, Tencent Hunyuan 3.0, Kimi K2.6 Plus, and finally, DeepSeek V4, released in the early hours of April 23.

On average, a new model was released every 2.7 days, a pace at which even fund managers can't keep up with the release notes. Yet across the K-lines of AI assets in China and Hong Kong over those 30 days, only one name left a lasting mark on the market. GPT-5.5 drove NVIDIA up 4.2% on April 8, but that day's close proved to be the peak. DeepSeek V4, on April 23-24, drove consecutive jumps across the China-Hong Kong computing power chain.

The difference does not lie in model capability itself. On the LMArena leaderboard, the gap between these 11 models is mostly within 50 points, a narrow band within the same tier. The difference lies in the combination of two things.

The first is open source. Of the first 10 models, only Llama 4 was open source, but its weight license came with a long list of commercial-use restrictions, drew a lukewarm reception from Western developer communities, and fell out of the OpenRouter top ten by the third day. V4's license is Apache 2.0: unrestricted weights, no limits on commercial use, and inference code released simultaneously. It is the first flagship open-source model in the past six months to pressure the closed-source camp on three dimensions at once: performance, price, and openness.

The second is timing. With the closed-source camp shipping one major update after another, the open-source narrative had been squeezed repeatedly. Opus 4.6 pushed the SWE-Bench score for code tasks to a new high, and GPT-5.5 set a new price anchor of $1.25 per million tokens. Silicon Valley has debated for two years whether open source can catch up with closed source. V4, an open-source flagship whose estimated MAU surged to 90 million, pressed pause on that debate.

As one manager at a large domestic fund put it in a roadshow: "Before V4, we applied a discount to the valuation of open-source large models. After V4, that discount is starting to flip into a premium."

DeepSeek Has Rewritten the Pricing Table of the Computing Power Supply Chain

V4's release notes contained a line that had never appeared in any official document of a Chinese large model: "Day 0 full-stack adaptation for Cambricon Siyuan 590 and Huawei Ascend 950PR, with deployment code open-sourced simultaneously." The weight of that line only becomes clear once you connect three undercurrents that have been unfolding in parallel over the past 12 months, on the hardware side, the software side, and in Silicon Valley's reaction.

The first undercurrent is on the chip side. Huawei's Ascend 950PR entered mass production in December 2025, with 1.56 PFLOPS of FP4 compute and 112GB of HBM, the first time a domestic AI chip has matched NVIDIA's B-series on hard metrics. On inference tasks for a 1T parameter MoE model like V4, single-card throughput is 2.87 times that of the H20. The accompanying CANN 8.0 software stack optimizes the LLM inference framework down to the operator level. DeepSeek's published benchmarks show that V4's end-to-end inference latency on an Ascend super node (8x 950PR) is 35% lower than on an equivalent-scale H100 cluster. Cambricon's Siyuan 590 numbers are even more aggressive: single-chip FP8 compute matching the H100 at less than half the price.
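The cost-performance claims above reduce to simple arithmetic. Here is a minimal sketch, where the throughput ratios are the article's figures and the prices are hypothetical placeholders chosen only to express "matching the H100 at less than half the price"; nothing here is vendor-confirmed data.

```python
# Back-of-envelope cost-performance comparison for inference cards.
# Throughput ratios come from the article; prices are hypothetical
# normalized units, not real list prices.

def cost_performance(throughput: float, price: float) -> float:
    """Throughput delivered per unit of money spent."""
    return throughput / price

# Normalize the H100 to throughput 1.0 at a price of 1.0 unit.
h100 = cost_performance(throughput=1.0, price=1.0)

# Siyuan 590: FP8 throughput roughly matching the H100 (per the article),
# at less than half the price (0.5 used here as the boundary case).
siyuan_590 = cost_performance(throughput=1.0, price=0.5)

# At the boundary case, the cost-performance advantage is at least 2x.
print(siyuan_590 / h100)
```

Under these assumptions the ratio comes out to at least 2x, which is exactly the kind of number a buy-side analyst can plug into a repricing model, and why "less than half the price" moves stocks in a way benchmark scores alone do not.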

The second undercurrent is on the software side. The vLLM mainline merged the Cambricon MLU backend PR on April 22, the first time an open-source inference framework has natively supported a non-NVIDIA domestic GPU. Hygon Information's DCU takes a different path, through the ROCm ecosystem, but can fully run V4's MoE routing layer. Deploying V4 is therefore no longer "runnable only on one specific domestic card" but "choosable among multiple domestic cards." The ecosystem's dependence on a single supplier is broken, a critical inflection point for production use.
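The "choosable among multiple domestic cards" point can be sketched as a backend dispatch table. This is a hypothetical illustration, not real vLLM or vendor API: the function `select_backend` and the hardware/backend strings are invented for the example, and only show the idea that one open-source model artifact can target several accelerator software stacks.

```python
# Hypothetical sketch: choosing an inference software stack based on the
# accelerators actually present in a deployment. None of these identifiers
# are real vLLM APIs; they illustrate multi-backend portability only.

SUPPORTED_BACKENDS = {
    "nvidia": "cuda",         # the traditional CUDA path
    "huawei_ascend": "cann",  # CANN stack for the Ascend 950PR
    "cambricon_mlu": "mlu",   # MLU backend merged into the vLLM mainline
    "hygon_dcu": "rocm",      # Hygon DCU via the ROCm ecosystem
}

def select_backend(available_hardware: list[str]) -> str:
    """Return the software stack for the first supported card found."""
    for hw in available_hardware:
        if hw in SUPPORTED_BACKENDS:
            return SUPPORTED_BACKENDS[hw]
    raise RuntimeError("no supported accelerator found")

# A deployment that never sees an NVIDIA card still has viable paths:
print(select_backend(["cambricon_mlu", "hygon_dcu"]))
```

The design point is the table itself: once more than one non-NVIDIA entry exists, no single vendor sits on the critical path, which is the inflection the paragraph describes.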

The third undercurrent comes from Silicon Valley. On April 15, NVIDIA CEO Jensen Huang was pressed by an analyst, on TSMC's earnings call, about the progress of China's domestic computing power. His exact words were stark and specific: "If they can really make LLMs break free from CUDA, it would be a disaster for us." Nine days later, DeepSeek provided the answer with a single Day 0 announcement.

The phrase "国产替代" (domestic substitution) has been overused to the point of losing meaning over the past three years. But after the morning of April 24, this matter gained specific data that can be priced by the capital market for the first time. Single-card throughput, end-to-end inference latency, inference cost, and commercially deployable code quietly pushed this long war of words past the threshold into production.

The logic behind Cambricon's 11 consecutive bullish sessions is hidden here. It is no longer a "domestic GPU concept stock" but a "DeepSeek V4 inference infrastructure supplier." The same logic explains Huahong's 12% surge in Hong Kong: it fabricates the 7nm-equivalent process for the 950PR. Every V4 token running on a domestic Ascend card means capacity originally destined for NVIDIA and TSMC is partially retained in the Pearl River Delta.

And the next step has long been laid out. In Huawei's roadmap, the 950DT (training version) is scheduled for delivery in Q4 2026, targeting "full-stack training of V5 or equivalent models on a 10,000-card cluster." If that path is completed, CUDA's moat on the training side of China's large models will be downgraded from "necessary" to "optional."

Related Q&A

Q: What were the immediate market reactions to the release of DeepSeek V4?

A: The A-share computing power chain stocks surged, with Cambricon rising 60% on the month, Hygon Information hitting its 10% daily limit, and SMIC and Huahong also posting significant gains. In contrast, Hong Kong-listed large model companies like Zhipu and MiniMax fell more than 7% each, with high short-selling ratios. NVIDIA opened down 1.8% but closed flat, indicating a repricing rather than panic selling.

Q: How does DeepSeek V4's open-source approach differ from other major models released in the same period?

A: DeepSeek V4 is released under the Apache 2.0 license, offering unrestricted commercial use, full weight access, and synchronized inference code. This contrasts with models like Llama 4, which had restrictive commercial clauses, and other closed-source models from Anthropic, Google, and OpenAI, making V4 the first flagship open-source model to pressure the closed-source camp on performance, price, and openness simultaneously.

Q: What specific hardware adaptations did DeepSeek V4 announce, and why are they significant?

A: DeepSeek V4 announced Day 0 full-stack adaptation for Cambricon's Siyuan 590 and Huawei's Ascend 950PR, with deployment code open-sourced. This is significant as it marks the first time a Chinese LLM can natively run on multiple domestic GPUs, breaking dependency on single suppliers like NVIDIA, and providing concrete data on performance gains, such as higher throughput and lower latency compared to H100 clusters.

Q: How did the release timing of DeepSeek V4 amplify its market impact?

A: V4 was released amid a crowded period of 11 major model releases or updates in 30 days, including from OpenAI, Google, and Anthropic, yet only V4's market effect persisted; GPT-5.5's one-day boost to NVIDIA faded quickly. V4's impact was magnified because it delivered a high-performance, fully open-source alternative just as the closed-source narrative was dominating, prompting a reevaluation of open-source model valuations.

Q: What long-term implications does DeepSeek V4's success have for NVIDIA and the global AI supply chain?

A: V4's success signals a potential decoupling of Chinese AI inference demand from North American supply, as it enables production-grade deployment on domestic hardware like Huawei Ascend and Cambricon chips. This could reduce reliance on NVIDIA's CUDA ecosystem and divert manufacturing capacity from TSMC to local foundries like Huahong, threatening NVIDIA's market dominance in China and prompting a strategic shift in global AI supply chains.
