The Pile
by EleutherAI · open-source · Last verified 2026-03-17
An 825 GiB diverse, open-source language modelling dataset assembled by EleutherAI from 22 high-quality sub-datasets including books, academic papers, code, and web text. It was the primary training corpus for GPT-Neo, GPT-J, and GPT-NeoX and established a new standard for transparent, reproducible pretraining data.
https://pile.eleuther.ai
Overall grade: B+ (Good)
Adoption: A · Quality: A · Freshness: C+ · Citations: A+ · Engagement: F
Specifications
- License: MIT
- Pricing: open-source
- Capabilities: language-modeling, pretraining, evaluation
- Integrations: hugging-face, apache-spark (see the loading sketch after this list)
- Use Cases: llm-pretraining, language-modeling, research
- API Available: Yes
- Tags: nlp, pretraining, large-scale, diverse, open-source
- Added: 2026-03-17
- Completeness: 100%
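The Hugging Face integration suggests the corpus can be pulled with the `datasets` library. A minimal sketch, assuming the data is still mirrored on the Hugging Face Hub under an identifier such as `EleutherAI/pile` (that exact repository id is an assumption, not confirmed by this listing); streaming avoids downloading all 825 GiB up front:

```python
# Minimal sketch: stream a few documents from The Pile via the
# Hugging Face `datasets` library. The repository id "EleutherAI/pile"
# is an assumption -- check the Hub for a current mirror before use.
from datasets import load_dataset

# streaming=True iterates over records lazily instead of fetching
# the full 825 GiB corpus to local disk.
pile = load_dataset("EleutherAI/pile", split="train", streaming=True)

for i, example in enumerate(pile):
    # Each record carries raw text plus metadata naming the sub-dataset
    # it came from (field names follow the original release).
    print(example["text"][:200])
    print(example["meta"])  # e.g. {"pile_set_name": "Pile-CC"}
    if i >= 2:
        break
```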
Index Score
- Overall: 74.6
- Adoption: 85
- Quality: 88
- Freshness: 55
- Citations: 92
- Engagement: 0
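The 74.6 composite is not a plain average of the five sub-scores (that would be 64.0), so the index evidently weights the dimensions unequally. A minimal sketch of a weighted composite follows; the weights are hypothetical placeholders, not the directory's published formula:

```python
# Hypothetical weighted composite for the index score. The weights are
# illustrative assumptions -- the listing does not publish its formula.
SCORES = {"adoption": 85, "quality": 88, "freshness": 55,
          "citations": 92, "engagement": 0}

WEIGHTS = {"adoption": 0.25, "quality": 0.25, "freshness": 0.15,
           "citations": 0.30, "engagement": 0.05}  # assumed; sums to 1.0

composite = sum(SCORES[k] * WEIGHTS[k] for k in SCORES)
# Prints 79.1 under these assumed weights; the listing reports 74.6,
# so the real weighting differs from this illustration.
print(f"composite = {composite:.1f}")
```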