DeepSeek-V4, the new Chinese AI model adapted for Huawei chips

China took another step toward a self-sufficient AI ecosystem on Friday, when Chinese startup DeepSeek unveiled a preview version of V4, its new AI model tailored to run on Huawei chips.

Here’s what we currently know about the eagerly anticipated open-source product.

V4 MODEL CHARACTERISTICS

According to DeepSeek, V4 is designed to work with agent frameworks such as OpenClaw and Claude Code, reflecting the industry’s shift away from prompt-based chatbots and toward models that can complete intricate, multi-step tasks with less human involvement.

V4 is available in two versions: the more powerful and costly Pro and the lighter, lower-priced Flash.

Pro is marketed as a higher-end model that performs on par with top closed-source systems, especially in competitive programming, agentic coding, world knowledge, and STEM (science, technology, engineering, and mathematics).

A DeepSeek paper published alongside the model says “DeepSeek-V4-Pro Max … redefines the state of the art for open models, outperforming its predecessors in core tasks.” In maximum reasoning mode, Pro outperforms all open-source models, though it still lags behind frontier closed-source systems such as Google’s Gemini 3.1 Pro and OpenAI’s GPT-5.4 in some areas.

While Flash is faster and less expensive than Pro, it has less world knowledge and performs worse on more difficult agent-based tasks. Nevertheless, it provides comparable reasoning capabilities in some domains.

Both versions support a 1-million-token context window, matching the expansion DeepSeek introduced with V3 in February. According to DeepSeek, V4’s architecture is designed to lower compute and memory costs for long-context use.

ADAPTED FOR HUAWEI CHIPS

A key change from earlier DeepSeek releases is that V4 was adapted for Huawei’s most advanced Ascend AI chips.

Reuters reported in February that DeepSeek had not shared its new model with U.S. chipmakers for performance tuning, instead granting early access to domestic companies such as Huawei, despite previously working closely with Nvidia’s technical staff.

Hours after the preview release, Huawei said V4 is fully supported on its Ascend 950-based supernode clusters, and that its chips were used for part of V4-Flash’s training.

“Through close technical collaboration … the entire Ascend supernode product line now supports the DeepSeek-V4 series models,” Huawei said.

DeepSeek’s earlier V3 and R1 models were trained on Nvidia chips. The company did not say whether the same applied to V4.

SELF-SUFFICIENCY PUSH AND LIMITS

Lian Jye Su, chief analyst at tech research firm Omdia, said the partnership shows DeepSeek models can deliver similar performance on both Huawei and Nvidia hardware.

“The popularity of DeepSeek in the domestic Chinese market encouraged Huawei to optimize the model for its hardware, and this, in turn, lowers the barriers for Chinese developers and companies to build AI apps entirely on domestic solutions,” he said.

He added that Huawei still trails Nvidia technologically, and moving developers away from Nvidia’s ecosystem remains difficult. Even so, he said, “DeepSeek’s pivot reveals real, tangible progress toward AI infrastructure self-sufficiency.”

DeepSeek also faces compute constraints under U.S. export controls on Nvidia chips and chipmaking equipment. The company said Pro can cost up to 12 times more than Flash because of “constraints in high-end compute capacity,” which limits current availability of the Pro service.

DeepSeek said Pro pricing could fall sharply once Huawei Ascend 950 supernodes are deployed at scale in the second half of the year.
