
Mixtral 8x7B

by Mistral AI · open-source · Last verified 2026-03-17

Mistral AI's sparse mixture-of-experts model with 8 expert feed-forward networks per layer, of which a router activates only 2 per token, so roughly 12.9B of the 46.7B total parameters are used at each step. It matches or outperforms GPT-3.5 on most standard benchmarks at a fraction of the inference compute of a comparably sized dense model.

https://mistral.ai
Overall grade: B (Above Average)
Adoption: A · Quality: B+ · Freshness: C+ · Citations: A · Engagement: F
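
To illustrate the routing scheme described above, here is a minimal sketch of a top-2 mixture-of-experts layer in PyTorch: a gating network scores 8 expert MLPs and each token runs through only the 2 highest-scoring experts. This is a simplified illustration of the technique, not Mixtral's actual implementation; the class name, layer sizes, and activation choice are placeholders.

```python
# Illustrative top-2 mixture-of-experts layer (simplified sketch, not Mixtral's code).
# Dimensions, names, and the SiLU activation are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    """Sparse MoE feed-forward layer: route each token to 2 of 8 expert MLPs."""
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts, bias=False)   # router
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                        # x: (n_tokens, d_model)
        scores, idx = self.gate(x).topk(self.top_k, dim=-1)
        weights = F.softmax(scores, dim=-1)      # renormalize over the 2 chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e         # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(1) * expert(x[mask])
        return out

# 8 experts exist, but each token only ever runs through 2 of them.
layer = Top2MoELayer()
tokens = torch.randn(4, 512)
print(layer(tokens).shape)                       # torch.Size([4, 512])
```

Because only 2 of the 8 expert blocks run per token, per-token compute tracks the roughly 12.9B active parameters rather than the 46.7B total, which is the efficiency claim in the description above.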

Specifications

License
Apache 2.0
Pricing
open-source
Capabilities
text-generation, code-generation, reasoning, multilingual, efficient-inference
Integrations
huggingface, ollama, vllm, together-ai, aws-bedrock, groq (a Hugging Face loading sketch follows this table)
Use Cases
enterprise-deployment, chatbots, code-generation, translation, analysis
API Available
Yes
Parameters
46.7B (12.9B active)
Context Window
32K tokens
Modalities
text
Training Cutoff
Not officially disclosed (model released December 2023)
Tags
llm, open-source, moe, efficient, sparse, mistral
Added
2026-03-17
Completeness
100%
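
As a concrete example of the huggingface integration listed in the Integrations row, the hedged sketch below loads the instruct checkpoint with the transformers library and runs a single chat turn. It assumes a transformers release with Mixtral support (4.36 or later), access to the mistralai/Mixtral-8x7B-Instruct-v0.1 weights, and enough memory for float16 inference; the prompt text is illustrative.

```python
# Minimal chat example via Hugging Face transformers (assumes transformers >= 4.36,
# sufficient GPU memory, and access to the published Mixtral instruct checkpoint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build a single-turn chat prompt using the model's chat template.
messages = [{"role": "user", "content": "Summarize what a mixture-of-experts model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same checkpoint is also served through the other listed integrations (ollama, vllm, together-ai, aws-bedrock, groq), so local loading via transformers is only one of several deployment paths.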

Index Score

69.4
Adoption
82
Quality
78
Freshness
50
Citations
84
Engagement
0
