Mixtral 8x22B
by Mistral AI · free · Last verified 2026-03-17
Mixtral 8x22B is a large-scale, open-source Mixture-of-Experts (MoE) model from Mistral AI. It has 141 billion total parameters but activates only 39 billion per token, combining large-model capability with efficient inference. The model excels at reasoning, code generation, and multilingual tasks, and includes native function calling.
https://mistral.ai
Overall grade: B (Above Average)
Adoption: B+ · Quality: A · Freshness: C+ · Citations: B+ · Engagement: F
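The weights are Apache 2.0 licensed and load through standard Hugging Face tooling (listed under Integrations below). A minimal text-generation sketch, assuming the instruct checkpoint published as mistralai/Mixtral-8x22B-Instruct-v0.1; with 39B parameters active per token out of a 141B total, running it realistically requires multiple GPUs or aggressive quantization.

```python
# Minimal inference sketch for Mixtral 8x22B via Hugging Face Transformers.
# Assumes the instruct checkpoint "mistralai/Mixtral-8x22B-Instruct-v0.1".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x22B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. float32
    device_map="auto",           # shard layers across available GPUs
)

messages = [{"role": "user", "content": "Summarize what a Mixture-of-Experts model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```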
Specifications
- License: Apache 2.0
- Pricing: free
- Capabilities: text-generation, code-generation, complex-reasoning, native-function-calling (see the sketch after this list), multilingual-processing, mathematical-problem-solving, instruction-following, summarization, data-extraction
- Integrations: Hugging Face Transformers, vLLM, LangChain, LlamaIndex, AWS SageMaker, Google Cloud Vertex AI, Microsoft Azure ML
- API Available: Yes
- Parameters: 141B total (39B active per token)
- Context Window: 64K tokens
- Modalities: text
- Training Cutoff: Mid 2024
- Tags: llm, open-source, moe, mistral-ai, large-language-model, code-generation, function-calling, reasoning, multilingual, self-hosting, apache-2.0
- Added: 2026-03-17
- Completeness: 95%
Index Score: 64.4
- Adoption: 72
- Quality: 83
- Freshness: 55
- Citations: 76
- Engagement: 0