
DeepSeek-V2

by DeepSeek · open-source · Last verified 2026-03-17

DeepSeek's mixture-of-experts model, notable for introducing Multi-head Latent Attention (MLA), which compresses the key-value cache to dramatically reduce inference cost. It activates only 21B of its 236B total parameters per token while matching the performance of larger dense models.
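The cost saving comes from how MLA handles the key-value cache: each token's hidden state is down-projected into one small shared latent vector, only that latent is cached, and per-head keys and values are re-expanded from it at attention time. The PyTorch sketch below illustrates that idea only; the layer names and dimensions are arbitrary, and DeepSeek-V2's actual design details (including its decoupled rotary-position branch) are omitted.

```python
import torch
import torch.nn as nn

class LatentKVAttention(nn.Module):
    """Toy attention layer caching a low-rank KV latent instead of full K/V."""

    def __init__(self, d_model=1024, n_heads=8, d_head=128, d_latent=64):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_head
        self.q_proj = nn.Linear(d_model, n_heads * d_head)
        self.kv_down = nn.Linear(d_model, d_latent)        # compress to shared latent
        self.k_up = nn.Linear(d_latent, n_heads * d_head)  # expand latent -> per-head K
        self.v_up = nn.Linear(d_latent, n_heads * d_head)  # expand latent -> per-head V
        self.out = nn.Linear(n_heads * d_head, d_model)

    def forward(self, x, latent_cache=None):
        # x: (batch, new_tokens, d_model); causal masking for prefill is omitted.
        B, T, _ = x.shape
        latent = self.kv_down(x)                           # (B, T, d_latent)
        if latent_cache is not None:                       # append to cached latents
            latent = torch.cat([latent_cache, latent], dim=1)
        S = latent.shape[1]
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(latent).view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(latent).view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(B, T, -1)
        # Only the small latent is returned for caching, not full K/V tensors.
        return self.out(y), latent
```

In this toy configuration the cache holds 64 values per token instead of 2 · 8 · 128 = 2048 for conventional per-head K/V, a 32x reduction.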

https://www.deepseek.com
Overall Grade: B (Above Average)
Adoption: B · Quality: B+ · Freshness: C+ · Citations: B+ · Engagement: F

Specifications

License: DeepSeek License
Pricing: open-source
Capabilities: text-generation, code-generation, reasoning, multilingual, efficient-inference
Integrations: huggingface, vllm, ollama (see the loading sketch after this list)
Use Cases: cost-efficient-inference, chatbots, code-generation, research, enterprise-deployment
API Available: Yes
Parameters: 236B total (21B active per token)
Context Window: 128K tokens
Modalities: text
Training Cutoff: Early 2024
Tags: llm, open-source, moe, cost-efficient, multi-head-latent-attention, deepseek
Added: 2026-03-17
Completeness: 100%
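Since Hugging Face is listed among the integrations, here is a minimal loading sketch using Transformers. The repo ID deepseek-ai/DeepSeek-V2-Chat and the trust_remote_code flag reflect common practice for this model but should be verified against the model card; the same checkpoint can also be served through vLLM or Ollama.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V2-Chat"  # assumed repo ID; check the model card
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # requires the accelerate package
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Explain Multi-head Latent Attention."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```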

Index Score

Overall: 60.8
Adoption: 68 · Quality: 78 · Freshness: 55 · Citations: 72 · Engagement: 0
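The Index Score is a composite of the five sub-scores, but the site does not publish its weighting, so the sketch below only illustrates the mechanics with a generic weighted average. Note that equal weights yield 54.6, so the published 60.8 implies the components are weighted unequally.

```python
# Sub-scores copied from the breakdown above.
SUB_SCORES = {"adoption": 68, "quality": 78, "freshness": 55,
              "citations": 72, "engagement": 0}

def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of sub-scores; weights are normalized to sum to 1."""
    total = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total

# Hypothetical equal weighting, shown only to demonstrate the mechanics;
# it does not reproduce the published 60.8.
equal = {k: 1.0 for k in SUB_SCORES}
print(round(composite(SUB_SCORES, equal), 1))  # 54.6
```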

Explore the full AI ecosystem on Agents as a Service