MATH Dataset
by UC Berkeley · open-source · Last verified 2026-03-17
A challenging benchmark of 12,500 competition mathematics problems drawn from contests such as the AMC and AIME, spanning 5 difficulty levels and 7 subjects. Each problem includes a full step-by-step solution in LaTeX, making the dataset suitable for both evaluation and training of mathematical reasoning.
https://huggingface.co/datasets/hendrycks/competition_math
Overall grade: B+ (Good)
Adoption: A · Quality: A+ · Freshness: B+ · Citations: A+ · Engagement: F
Specifications
- License: MIT
- Pricing: open-source
- Capabilities: math-evaluation, advanced-reasoning-benchmark, step-by-step-solutions
- Integrations: huggingface-datasets, lm-eval-harness
- Use Cases: model-evaluation, advanced-math-reasoning, mathematical-training
- API Available: No
- Tags: benchmark, competition-math, hard-math, step-by-step, latex
- Added: 2026-03-17
- Completeness: 100%
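Since each solution is LaTeX with the final answer wrapped in `\boxed{...}`, a common evaluation step is extracting that boxed answer for comparison against model output. A minimal sketch follows; the sample `record` is a hypothetical dict mirroring the dataset's commonly documented fields (`problem`, `level`, `type`, `solution`), not an entry pulled from the live dataset.

```python
def extract_boxed(solution):
    """Return the contents of the last \\boxed{...} in a LaTeX solution,
    matching braces so nested groups (e.g. \\frac{1}{2}) survive intact."""
    start = solution.rfind(r"\boxed{")
    if start == -1:
        return None
    i = start + len(r"\boxed{")
    depth = 1
    out = []
    while i < len(solution):
        ch = solution[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                break
        out.append(ch)
        i += 1
    return "".join(out)

# Hypothetical record illustrating the assumed schema.
record = {
    "problem": "What is $1+2+\\cdots+10$?",
    "level": "Level 1",
    "type": "Algebra",
    "solution": "The sum is $\\frac{10\\cdot 11}{2}=\\boxed{55}$.",
}
print(extract_boxed(record["solution"]))  # → 55
```

In practice the records would come from `datasets.load_dataset(...)` using the Hugging Face URL above, and the extracted string is normalized before exact-match scoring, as harnesses like lm-eval-harness do.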
Index Score: 77.3
- Adoption: 88
- Quality: 93
- Freshness: 72
- Citations: 94
- Engagement: 0