DeepSeek-V2: a 236B-parameter MoE model. Leading performance. Ultra-affordable. Unparalleled experience. Chat and the API have been upgraded to the latest model.
Brand new experience, redefining possibilities
DeepSeek-V2 Capabilities

DeepSeek-V2 delivers impressive results on today's major large-model leaderboards.

Ranks in the top 3 on AlignBench
Surpassing GPT-4 and approaching GPT-4-Turbo
Ranks in the top tier on MT-Bench
Rivaling LLaMA3-70B and outperforming Mixtral 8x22B
Specializes in math, code, and reasoning
The open-source model supports a 128K context length, while Chat and the API support a 32K context length
| Model | Open Source | AlignBench (Chinese General) | MT-Bench (English General) | MMLU (Knowledge) | GSM8K (Arithmetic) | MATH (Math) | BBH (Reasoning) | HumanEval (Coding) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DeepSeek-V2 | Yes | 7.91 | 8.97 | 77.8 | 92.2 | 53.9 | 79.7 | 81.1 |
| GPT-4-Turbo-1106 | - | 8.01 | 9.32 | 84.6 | 93.0 | 64.1 | - | 82.2 |
| GPT-4-0613 | - | 7.53 | 8.96 | 86.4 | 92.0 | 52.9 | 83.1 | 84.1 |
| GPT-3.5 | - | 6.08 | 8.21 | 70.0 | 57.1 | 34.1 | 66.6 | 48.1 |
| Gemini 1.5 Pro | - | 7.33 | 8.93 | 81.9 | 91.7 | 58.5 | 84.0 | 71.9 |
| Claude 3 Opus | - | 7.62 | 9.00 | 86.8 | 95.0 | 61.0 | 86.8 | 84.9 |
| Claude 3 Sonnet | - | 6.70 | 8.47 | 79.0 | 92.3 | 40.5 | 82.9 | 73.0 |
| Claude 3 Haiku | - | 6.42 | 8.39 | 75.2 | 88.9 | 40.9 | 73.7 | 75.9 |
| abab-6.5 | - | 7.97 | 8.82 | 79.5 | 91.7 | 51.4 | 82.0 | 78.0 |
| abab-6.5s | - | 7.34 | 8.69 | 74.6 | 87.3 | 42.0 | 76.8 | 68.3 |
| ERNIE-4.0 | - | 7.89 | 7.69 | - | 91.3 | 52.2 | - | 72.0 |
| GLM-4 | - | 7.88 | 8.60 | 81.5 | 87.6 | 47.9 | 82.3 | 72.0 |
| Moonshot-v1 | - | 7.22 | 8.59 | - | 89.5 | 44.2 | - | 82.9 |
| Baichuan 3 | - | - | 8.70 | 81.7 | 88.2 | 49.2 | 84.5 | 70.1 |
| Qwen1.5 72B | Yes | 7.19 | 8.61 | 76.2 | 81.9 | 40.6 | 65.9 | 68.9 |
| LLaMA 3 70B | Yes | 7.42 | 8.95 | 80.3 | 93.2 | 48.5 | 80.1 | 76.2 |
| Mixtral 8x22B | Yes | 6.49 | 8.66 | 77.8 | 87.9 | 49.8 | 78.4 | 75.0 |
DeepSeek-V2 API Pricing
Input: $0.14 per million tokens
Output: $0.28 per million tokens
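To see how these rates translate into per-request cost, here is a minimal sketch that multiplies token counts by the listed per-million-token prices; the token counts in the example are hypothetical.

```python
def api_cost_usd(input_tokens: int, output_tokens: int,
                 input_price_per_m: float = 0.14,
                 output_price_per_m: float = 0.28) -> float:
    """Estimate the cost of one DeepSeek-V2 API call in USD."""
    return (input_tokens / 1_000_000) * input_price_per_m + \
           (output_tokens / 1_000_000) * output_price_per_m

# Hypothetical request: 120,000 input tokens and 8,000 output tokens.
print(f"${api_cost_usd(120_000, 8_000):.4f}")  # prints $0.0190
```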
Chinese Performance vs. API Price

Among the top models on AlignBench, DeepSeek-V2's performance is in the top tier globally, while its API pricing is far below that of comparable models.

Why DeepSeek-V2?
Capable: 236B parameters, 32K context length (Chat/API)
Cost-effective: $0.14 per million input tokens, $0.28 per million output tokens
Seamless: compatible with the OpenAI API
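Because the API follows the OpenAI request format, existing OpenAI SDK code can be pointed at DeepSeek with only a base-URL and model-name change. The sketch below assumes the endpoint `https://api.deepseek.com` and the model identifier `deepseek-chat`; verify both against the DeepSeek platform documentation.

```python
# Minimal sketch of calling DeepSeek-V2 through the OpenAI Python SDK (openai >= 1.0).
# The base_url and model name are assumptions; confirm them in the platform docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # key issued by the DeepSeek platform
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",                # assumed identifier for DeepSeek-V2
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize mixture-of-experts in one sentence."},
    ],
)
print(response.choices[0].message.content)
```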