BookCorpus
by University of Toronto · open-source · Last verified 2026-03-17
A dataset of over 11,000 unpublished books spanning fiction and non-fiction genres, originally scraped from Smashwords and used as the primary pretraining corpus for BERT alongside Wikipedia. It provides rich long-range dependency data that helps models learn coherent narrative structure and extended discourse patterns.
https://huggingface.co/datasets/bookcorpus
Grade: B+ (Good)
Adoption: A · Quality: A · Freshness: D · Citations: A+ · Engagement: F
Specifications
- License: Custom
- Pricing: open-source
- Capabilities: language-modeling, pretraining, long-range-understanding
- Integrations: hugging-face, tensorflow-datasets
- Use Cases: llm-pretraining, transfer-learning, research
- API Available: Yes
- Tags: nlp, books, long-form, pretraining, bert
- Added: 2026-03-17
- Completeness: 100%
Index Score: 71.3
- Adoption: 82
- Quality: 80
- Freshness: 30
- Citations: 90
- Engagement: 0