DeepSeek V4 Finally Released, Breaking the Strongest Closed-Source Monopoly, Explicitly Partnering with Huawei Chips

marsbit · Published 2026-04-24 · Last updated 2026-04-24

Abstract

DeepSeek AI has officially released DeepSeek-V4, available in two versions: the high-performance **DeepSeek-V4-Pro** (49B activated parameters, 1.6T total) and the more efficient **DeepSeek-V4-Flash** (13B activated parameters, 284B total). Both support a 1M context length, making long-context capability a baseline feature rather than a premium offering. The Pro version rivals top closed-source models in agent capabilities, world knowledge, and reasoning performance. It outperforms Claude Sonnet 4.5 in agentic coding and approaches Claude Opus 4.6 (non-thinking mode) in quality. The Flash version offers competitive performance at a lower cost, though it lags in highly complex tasks. A key technical innovation is a new attention mechanism that reduces computational and memory demands for long contexts. The models are optimized for agent frameworks like Claude Code and OpenClaw. API services are available with support for both OpenAI and Anthropic-style interfaces. DeepSeek also announced upcoming support for Huawei’s computing hardware in the second half of the year. The models are open-sourced on Hugging Face and ModelScope.

Just now, DeepSeek-V4 is here!

The preview version is officially launched and simultaneously open-sourced.

There are two versions in total:

DeepSeek-V4-Pro: comparable to top closed-source models; 1.6T total parameters, 49B activated, 1M context length;

DeepSeek-V4-Flash: a smaller, faster, economical version; 284B total parameters, 13B activated, 1M context length.

The official claim: it leads both domestically and among open-source models in agent capabilities, world knowledge, and reasoning performance.

And:

DeepSeek-V4 is now the agentic-coding model used by company employees. According to evaluation feedback, the user experience beats Sonnet 4.5 and delivery quality approaches Opus 4.6 in non-thinking mode, though a gap remains against the Opus 4.6 thinking model.

Currently, both the official website and the app have been updated, and the API service has also been synchronized.

On the much-watched question of domestic computing power, the key point: support for Huawei compute in the second half of the year.

Top-Tier and Cost-Effective Choices, Two Versions Launched Together

This time, V4 releases two versions at once.

V4-Pro, performance comparable to top closed-source models.

The official assessment makes three points:

Significantly improved agent capabilities: in the Agentic Coding evaluation, V4-Pro reaches the best level among current open-source models, and it performs strongly in other agent-related evaluations as well. In internal evaluations of agent-coding mode, the V4 experience beats Sonnet 4.5 and delivery quality approaches Opus 4.6 in non-thinking mode, though a gap remains against the Opus 4.6 thinking mode.

Rich world knowledge: In world knowledge evaluations, DeepSeek-V4-Pro significantly leads other open-source models, only slightly inferior to the top closed-source model Gemini-Pro-3.1.

World-class reasoning performance: In evaluations of mathematics, STEM, and competitive code, DeepSeek-V4-Pro surpasses all currently publicly evaluated open-source models and achieves excellent results comparable to the world's top closed-source models.

V4-Flash is the smaller, faster, economical version: reasoning ability close to Pro, world knowledge slightly behind, but with fewer total and activated parameters and a cheaper API.

In Agent tasks, DeepSeek-V4-Flash is on par with DeepSeek-V4-Pro in simple tasks, but there is still a gap in high-difficulty tasks.

In the car wash test, V4 also passed quickly.

In the classic biology puzzle "Desperate Father," DeepSeek-V4 did not grasp the key point, red-green color blindness, in a single round (by the rules of inheritance, if a female is red-green color blind, her biological father must be as well).
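The rule the puzzle hinges on is mechanical: red-green color blindness is X-linked recessive, so an affected female carries the allele on both X chromosomes, and one of those X's necessarily came from her father, whose only X it is. A brute-force check of that logic (an illustrative sketch, not anything from the release):

```python
from itertools import product

# X-linked recessive inheritance: "c" = color-blind allele, "C" = normal.
# A daughter inherits her father's single X plus one of her mother's two.

def daughters(father_x, mother):
    """All possible daughter genotypes (maternal X, paternal X)."""
    return [(mother_x, father_x) for mother_x in mother]

# Enumerate every parental combination and record which paternal
# genotypes can ever produce a color-blind ("c","c") daughter.
colorblind_daughter_fathers = set()
for father_x in "Cc":
    for mother in product("Cc", repeat=2):
        for d in daughters(father_x, mother):
            if d == ("c", "c"):  # daughter is color-blind
                colorblind_daughter_fathers.add(father_x)

print(colorblind_daughter_fathers)  # {'c'}: the father must be color-blind
```

The enumeration confirms the puzzle's key deduction: no genotype combination yields a color-blind daughter from an unaffected biological father.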

Million Context Length Becomes Standard

It is worth mentioning that from today, 1M context length is standard for all DeepSeek official services.

A year ago, 1M context length was Gemini's exclusive trump card; all other closed-source models were either 128K or 200K; on the open-source side, almost no one could afford this level.

DeepSeek directly moved the million context length from a "high-end feature" to "basic infrastructure."

And it's open source. How did they do it? The release gives the answer directly:

V4 introduces a new attention mechanism that compresses along the token dimension, used in combination with DSA sparse attention. Compared with traditional attention, it substantially reduces compute and memory requirements.
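The release does not spell out the mechanism, but the cost argument behind token-dimension compression is easy to illustrate: if keys and values are pooled from n tokens down to m << n entries before attention, the score matrix shrinks from n x n to n x m. The toy sketch below (mean-pooling over random data) is purely illustrative and is not DeepSeek's actual design:

```python
import numpy as np

n, m, d = 1024, 64, 32   # sequence length, compressed length, head dim
rng = np.random.default_rng(0)
q = rng.standard_normal((n, d))
k = rng.standard_normal((n, d))
v = rng.standard_normal((n, d))

# Toy token-dimension compression: mean-pool keys/values over
# consecutive blocks of n // m tokens each.
blk = n // m
k_c = k.reshape(m, blk, d).mean(axis=1)
v_c = v.reshape(m, blk, d).mean(axis=1)

# Attention over compressed keys: the score matrix is n x m, not n x n.
scores = q @ k_c.T / np.sqrt(d)
w = np.exp(scores - scores.max(axis=-1, keepdims=True))
w /= w.sum(axis=-1, keepdims=True)   # row-wise softmax
out = w @ v_c

print(scores.shape)  # (1024, 64): a 16x smaller score matrix
```

At 1M context the quadratic term dominates, which is why cutting the score matrix from n x n to n x m (and sparsifying what remains) is what makes long context affordable.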

DSA is not a new term. It was first introduced in the V3.2-Exp update half a year ago. It drew little outside attention at the time, because its benchmark scores were nearly identical to V3.1-Terminus, making it look like an insignificant intermediate release.

Looking back now, that was the foundation of V4.

Special Optimization for Agent Capabilities

On the agent side, V4 has been adapted and optimized for mainstream agent products such as Claude Code, OpenClaw, OpenCode, and CodeBuddy, with improvements on code tasks and document-generation tasks.

The release also includes an example slide (a PPT inner page) generated by V4-Pro under one agent framework.

API Pricing

On the API side, V4-Pro and V4-Flash are simultaneously launched, supporting both OpenAI ChatCompletions interface and Anthropic interface.

The base_url remains unchanged; just set the model parameter to deepseek-v4-pro or deepseek-v4-flash.

Both versions have a maximum context length of 1M and support both non-thinking mode and thinking mode. In thinking mode, the intensity can be adjusted through the reasoning_effort parameter, with two levels: high and max. The official recommendation is to directly use max for complex Agent scenarios.
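A sketch of what such a request might look like through the OpenAI-style ChatCompletions interface described above. Everything here is illustrative: the payload fields mirror the article's description, and the exact schema should be checked against the official API docs before use.

```python
# Illustrative ChatCompletions-style payload for DeepSeek-V4.
# The field names below follow the article's description and are
# assumptions, not authoritative API documentation.
import json
from typing import Optional

def build_request(model: str, prompt: str,
                  effort: Optional[str] = None) -> dict:
    """Assemble a request payload; switching between V4-Pro and
    V4-Flash is just a change of the model name."""
    payload = {
        "model": model,  # "deepseek-v4-pro" or "deepseek-v4-flash"
        "messages": [{"role": "user", "content": prompt}],
    }
    if effort is not None:
        # Thinking mode: the article names two levels, "high" and "max".
        payload["reasoning_effort"] = effort
    return payload

# Complex agent scenario: the release recommends "max".
req = build_request("deepseek-v4-pro", "Refactor this module.", effort="max")
print(json.dumps(req, indent=2))
```

Omitting `reasoning_effort` corresponds to non-thinking mode in this sketch; only the `model` string differs between the two versions.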

And here is the key point: support for Huawei computing hardware in the second half of the year.

In addition, old model names will be discontinued.

deepseek-chat and deepseek-reasoner will be discontinued in three months (July 24, 2026). Until then, the two names point to the non-thinking and thinking modes of V4-Flash, respectively.

For individual developers the impact is small: change one model parameter. Teams running production environments should migrate within the three-month window.
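The migration really is just a model-name change. A minimal sketch of the mapping, assembled from the article's wording rather than from official deprecation docs:

```python
# Legacy model names (discontinued 2026-07-24) and their current
# targets per the article: both alias V4-Flash, in non-thinking and
# thinking mode respectively. This table is an assumption drawn from
# the article, not an official migration guide.
LEGACY_MODELS = {
    "deepseek-chat": "deepseek-v4-flash",      # non-thinking mode
    "deepseek-reasoner": "deepseek-v4-flash",  # thinking mode
}

def migrate_model_name(model: str) -> str:
    """Return the replacement model name, or the input unchanged."""
    return LEGACY_MODELS.get(model, model)

print(migrate_model_name("deepseek-chat"))  # deepseek-v4-flash
```

Callers previously using deepseek-reasoner would additionally enable thinking mode (e.g. via the reasoning_effort parameter) to keep equivalent behavior.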

One more thing

At the end of the release, DeepSeek quoted a sentence.

"Not tempted by praise, not frightened by slander; follow the path and act, and keep oneself upright."

The line is from Xunzi's "Contra the Twelve Masters" (非十二子). Its literal meaning: do not be lured by praise or frightened by slander; move forward along the path you believe in, and keep yourself upright.

In today's context, it's somewhat interesting.

Over the past six months, rumors circulated back and forth in Chinese- and English-language AI circles: when would V4 be released, was it delayed, had it been overtaken by others, had it been undermined by Claude-distilled data. At the start of the year some even confidently predicted a release before Spring Festival, but it did not arrive until the end of April.

They never responded once.

Then, on a Friday afternoon, they released V4, simultaneously open-sourced it, simultaneously launched it on the official website and app, simultaneously updated the API, and even wrote into the release that internal employees have already abandoned Claude.

No roadmap, no live stream, no interviews.

The four characters "率道而行" (follow the path and act) sound like a slogan. But look at the path of the past six months: the "unremarkable" V3.2-Exp release, the DSA sparse attention that quietly paved the way for V4 for half a year, and the trajectory that turned 1M context from a trump card into a standard feature.

DeepSeek has already done it.

DeepSeek-V4 model open-source links:

[1]https://huggingface.co/collections/deepseek-ai/deepseek-v4

[2]https://modelscope.cn/collections/deepseek-ai/DeepSeek-V4

DeepSeek-V4 Technical Report: https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main/DeepSeek_V4.pdf

This article is from the WeChat public account "QbitAI", author: QbitAI

Related Questions

Q: What are the two versions of DeepSeek-V4 that were released, and what are their key specifications?

A: DeepSeek-V4 was released in two versions: DeepSeek-V4-Pro and DeepSeek-V4-Flash. The Pro version has 1.6T parameters with 49B activated and a 1M context length. The Flash version is a smaller, faster, and more economical model with 284B parameters, 13B activated, and also a 1M context length.

Q: According to the article, how does DeepSeek-V4-Pro's performance compare to top closed-source models like Anthropic's Opus 4.6?

A: According to internal evaluations, DeepSeek-V4-Pro's performance in Agent Coding mode is better than Sonnet 4.5 and its delivery quality is close to Opus 4.6 in non-thinking mode, but it still has a gap compared to Opus 4.6 in thinking mode.

Q: What major technical achievement is highlighted for the DeepSeek-V4 models regarding context length?

A: A major technical achievement is that a 1M context length has become the standard for all DeepSeek official services. This was achieved through a novel attention mechanism that compresses at the token dimension and is combined with DSA sparse attention, significantly reducing computational and memory requirements.

Q: What significant partnership or hardware support is announced for the future of DeepSeek's models?

A: The article announces that DeepSeek will support Huawei's computing power in the second half of the year.

Q: Where can users find the open-source models and the technical report for DeepSeek-V4?

A: The open-source models can be found on Hugging Face and ModelScope collections under 'deepseek-ai/deepseek-v4'. The technical report is available at: https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main/DeepSeek_V4.pdf

