
Chinchilla

by Google DeepMind · paid · Last verified 2026-03-17

Chinchilla is DeepMind's 70-billion-parameter language model that introduced the landmark 'Chinchilla scaling laws', showing that most large language models are significantly undertrained relative to their compute budget. By training a 70B-parameter model on 1.4 trillion tokens, Chinchilla outperformed the 280B-parameter Gopher model at the same compute budget, reshaping how the field approaches LLM training.

https://arxiv.org/abs/2203.15556
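As a rough illustration of the compute-optimal recipe, the sketch below combines the widely cited training-FLOPs approximation C ≈ 6·N·D with the rule of thumb, derived from the Chinchilla results, of roughly 20 training tokens per parameter. The helper names and the exact 20 tokens/parameter constant are illustrative simplifications for this card, not the paper's full parametric loss fit.

```python
# Sketch: Chinchilla-style compute-optimal sizing (illustrative assumptions).
# Uses the common FLOPs approximation C ~= 6 * N * D and the rule of thumb
# of roughly 20 training tokens per parameter attributed to the paper.

def compute_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal training-token count for a model of n_params."""
    return n_params * tokens_per_param

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training FLOPs via C ~= 6 * N * D."""
    return 6.0 * n_params * n_tokens

if __name__ == "__main__":
    chinchilla_params = 70e9
    tokens = compute_optimal_tokens(chinchilla_params)  # ~1.4e12, matching the paper
    print(f"Tokens: {tokens:.2e}")                      # 1.40e+12
    print(f"FLOPs:  {training_flops(chinchilla_params, tokens):.2e}")  # ~5.9e+23

    # Gopher-sized comparison: 280B params on ~300B tokens lands at a
    # similar FLOPs total, which is the "same compute" claim above.
    print(f"Gopher FLOPs: {training_flops(280e9, 300e9):.2e}")  # ~5.0e+23
```

Under these assumptions, the 70B/1.4T and 280B/300B configurations cost a comparable number of training FLOPs, which is why the two models are directly comparable at a fixed compute budget.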
Index Grade: C+ (Average)
Adoption: C · Quality: A · Freshness: D · Citations: A+ · Engagement: F

Specifications

License
Proprietary
Pricing
paid
Capabilities
text-generation, question-answering, language-understanding, few-shot-learning
Integrations
Use Cases
LLM scaling research, compute-optimal training studies, NLP benchmark evaluation, AI policy research
API Available
No
Parameters
~70B
Context Window
2K tokens
Modalities
text
Training Cutoff
2022
Tags
foundational, deepmind, scaling-laws, compute-optimal, historical
Added
2026-03-17
Completeness
100%

Index Score

Overall: 56
Adoption: 40
Quality: 85
Freshness: 30
Citations: 92
Engagement: 0
