
MMLU Dataset

by UC Berkeley · open-source · Last verified 2026-03-17

Massive Multitask Language Understanding (MMLU) is a benchmark covering 57 academic subjects, from STEM to the humanities, with over 14,000 multiple-choice questions at undergraduate and professional levels. It has become the de facto standard for measuring broad world knowledge and academic reasoning in LLMs.

https://huggingface.co/datasets/cais/mmlu
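Since the dataset ships in Hugging Face `datasets` format, a minimal sketch of working with its records may help. The sample record below is invented for illustration, but the field names (`question`, `subject`, `choices`, `answer`) follow the cais/mmlu schema, where `answer` is the 0-based index into `choices`:

```python
# Sketch of handling MMLU-style records; the sample record is invented.
# With network access you would load the real data instead, e.g.:
#   from datasets import load_dataset
#   mmlu = load_dataset("cais/mmlu", "all", split="test")

sample = {
    "question": "What is the derivative of x**2?",
    "subject": "college_mathematics",
    "choices": ["x", "2x", "x**2", "2"],
    "answer": 1,  # 0-based index of the correct choice ("2x")
}

def score(records, predictions):
    """Fraction of records where the predicted choice index is correct."""
    correct = sum(
        1 for rec, pred in zip(records, predictions) if pred == rec["answer"]
    )
    return correct / len(records)

print(score([sample], [1]))  # → 1.0
```

A model under evaluation would supply `predictions` as one choice index per question, typically the answer option it assigns the highest likelihood.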
Overall grade: A (Great)
Adoption: A+ · Quality: A+ · Freshness: B+ · Citations: A+ · Engagement: F

Specifications

License
MIT
Pricing
open-source
Capabilities
knowledge-evaluation, benchmark, multiple-choice-qa
Integrations
huggingface-datasets, lm-eval-harness
Use Cases
model-evaluation, benchmarking, knowledge-testing
API Available
No
Tags
benchmark, multiple-choice, knowledge, 57-subjects, academic
Added
2026-03-17
Completeness
100%
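For the benchmarking use case listed above, an overall MMLU score is commonly reported as accuracy averaged across the 57 subjects (a macro average) rather than pooled over all questions. A sketch of that aggregation, with invented per-subject results:

```python
from collections import defaultdict

# Sketch of aggregating per-question correctness into a macro average
# over subjects, one common way MMLU scores are reported.
# The (subject, correct) pairs below are invented for illustration.
results = [
    ("college_mathematics", True),
    ("college_mathematics", False),
    ("philosophy", True),
    ("philosophy", True),
]

def macro_average(results):
    """Mean of per-subject accuracies (each subject weighted equally)."""
    by_subject = defaultdict(list)
    for subject, correct in results:
        by_subject[subject].append(correct)
    per_subject = {s: sum(v) / len(v) for s, v in by_subject.items()}
    return sum(per_subject.values()) / len(per_subject)

print(macro_average(results))  # → 0.75 (mean of 0.5 and 1.0)
```

Harnesses such as lm-eval-harness handle this aggregation automatically; the macro average weights small subjects equally with large ones, so it can differ from the pooled per-question accuracy.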

Index Score

80.9
Adoption
96
Quality
90
Freshness
75
Citations
98
Engagement
0
