
BIG-Bench Hard

by Google DeepMind · open-source · Last verified 2026-03-01

A curated subset of 23 challenging BIG-Bench tasks on which prior language models scored below the average human rater. The suite specifically targets tasks that benefit substantially from chain-of-thought prompting and multi-step reasoning.
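In practice the distinction is concrete: BBH's published prompts pair each task with few-shot exemplars either in answer-only form or with a worked rationale that opens with "Let's think step by step." An illustrative pair for the boolean_expressions task (the instance below is representative, not taken verbatim from the dataset):

    Direct:            Q: not ( True ) and ( True ) is
                       A: False

    Chain-of-thought:  Q: not ( True ) and ( True ) is
                       A: Let's think step by step. not ( True ) is False,
                          and False and True is False. So the answer is False.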

https://github.com/suzgunmirac/BIG-Bench-Hard
Overall Grade: B+ (Good)
Adoption: A · Quality: A · Freshness: B+ · Citations: A · Engagement: F

Specifications

License: Apache-2.0
Pricing: open-source
Capabilities: model-evaluation, hard-task-testing, reasoning-assessment
Integrations: lm-eval-harness
Use Cases: frontier-model-comparison, reasoning-evaluation, chain-of-thought-assessment
API Available: No
Evaluated Models: claude-4, gpt-5, gemini-2.5-pro, deepseek-v3, llama-4-405b
Metrics: accuracy, cot-accuracy
Methodology: 23 tasks from BIG-Bench selected for difficulty. Each task is evaluated with both direct answering and chain-of-thought prompting to measure the gain from explicit reasoning (see the harness sketch below this list).
Last Run: 2026-02-05
Tags: benchmark, evaluation, reasoning, hard-tasks, chain-of-thought
Added: 2026-03-17
Completeness: 100%
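The accuracy and cot-accuracy metrics correspond to the direct and chain-of-thought BBH task variants that ship with the lm-eval-harness integration listed above. A minimal sketch of running both, assuming lm-evaluation-harness v0.4+; the model identifier is a placeholder, and the bbh task-group names can differ between harness versions:

    import lm_eval

    # Evaluate the same checkpoint on BBH with direct answering
    # (bbh_fewshot) and with chain-of-thought prompting
    # (bbh_cot_fewshot), then compare the two sets of scores.
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args="pretrained=your-org/your-model",  # placeholder checkpoint
        tasks=["bbh_fewshot", "bbh_cot_fewshot"],
        batch_size=8,
    )

    for task, metrics in results["results"].items():
        print(task, metrics)

The per-task gap between the two runs is the "reasoning improvement" the methodology refers to; BBH was selected precisely so that this gap is large.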

Index Score: 70.1
Adoption: 80 · Quality: 88 · Freshness: 78 · Citations: 82 · Engagement: 0
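The 70.1 composite is consistent with a weighted average of the five component scores. The weights in the sketch below are an assumption chosen so the arithmetic reproduces the listed value; the directory does not publish its actual formula.

    # Hypothetical weights; the directory's real formula is not published.
    weights = {"adoption": 0.25, "quality": 0.25, "freshness": 0.15,
               "citations": 0.20, "engagement": 0.15}
    scores = {"adoption": 80, "quality": 88, "freshness": 78,
              "citations": 82, "engagement": 0}

    index = sum(weights[k] * scores[k] for k in weights)
    print(round(index, 1))  # 70.1 under these assumed weights

Whatever the exact weighting, the F in Engagement (score 0) is what drags an otherwise A-grade profile down to a B+ overall.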
