The Chinese AI firm DeepSeek has unveiled its latest model, V4, which offers a much longer context window and lower costs than its predecessors. The new version handles up to 1 million tokens of context and performs comparably to leading closed-source models on key benchmarks.
DeepSeek claims that V4-Pro rivals top models from Anthropic and OpenAI on performance at a fraction of the cost, with V4-Flash being cheaper still. The model is optimized for popular agent frameworks such as Claude Code and OpenClaw, making it an attractive choice for developers working on complex coding tasks.
The key innovation in V4 lies in its memory efficiency. By compressing older information and focusing on the parts most likely to matter in the present moment, DeepSeek has reduced computing power and memory use by significant margins. This could make it cheaper to build tools that need to work across huge amounts of material.
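DeepSeek has not published the mechanism in detail here, but the general idea of compressing older context while keeping recent material intact can be sketched in a toy form. Everything below is invented for illustration: the frequency-based relevance score stands in for whatever learned importance signal a real model would use, and the window and budget parameters are arbitrary.

```python
from collections import Counter


def compress_context(tokens, recent_window=4, budget=6):
    """Toy context compression: keep the most recent tokens verbatim,
    and retain only the highest-scoring older tokens so the total
    stays within a fixed budget. Frequency is used as a crude,
    hypothetical proxy for relevance."""
    recent = tokens[-recent_window:]
    older = tokens[:-recent_window]

    # Score older tokens by how often they appear (illustrative proxy only).
    freq = Counter(older)

    # Keep only the top-scoring older tokens, preserving their original order.
    keep_n = max(budget - recent_window, 0)
    ranked = sorted(range(len(older)), key=lambda i: freq[older[i]],
                    reverse=True)[:keep_n]
    kept_older = [older[i] for i in sorted(ranked)]

    return kept_older + recent


history = ["a", "b", "a", "c", "a", "d", "e", "f", "g", "h"]
print(compress_context(history))  # 6 tokens: 2 compressed older + 4 recent
```

The payoff of any scheme like this is that memory and compute scale with the fixed budget rather than with the full history, which is what makes very long documents cheaper to work across.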
DeepSeek’s return to cutting-edge models comes after months of scrutiny, but the release is a win for developers and companies alike: V4 gives them frontier AI capabilities on their own terms, without the worry of skyrocketing costs.







