TruthfulQA Dataset
by University of Oxford · open-source · Last verified 2026-03-17
TruthfulQA measures the truthfulness of LLMs across 817 adversarially crafted questions spanning 38 categories where humans are commonly misled by false beliefs. Models are scored on generating truthful AND informative answers, revealing how larger models can paradoxically become more confidently wrong.
https://huggingface.co/datasets/truthful_qa ↗B+
B+—Good
Adoption: AQuality: AFreshness: B+Citations: A+Engagement: F
Specifications
- License
- Apache-2.0
- Pricing
- open-source
- Capabilities
- truthfulness-evaluation, hallucination-detection, factual-accuracy
- Integrations
- huggingface-datasets, lm-eval-harness
- Use Cases
- model-evaluation, hallucination-research, alignment-testing
- API Available
- No
- Tags
- benchmark, truthfulness, hallucination, factual-accuracy, adversarial
- Added
- 2026-03-17
- Completeness
- 100%
Index Score
75.1Adoption
87
Quality
89
Freshness
71
Citations
90
Engagement
0
Put AI to work for your business
Deploy this dataset alongside autonomous AaaS agents that handle tasks end-to-end — no babysitting required.