HumanEval
by OpenAI · free · Last verified 2026-04-24
HumanEval is OpenAI's code-generation benchmark: 164 hand-written Python programming problems, each with a function signature, docstring, and unit tests. It measures a model's ability to generate syntactically valid, functionally correct code from docstring descriptions. Introduced alongside Codex in 2021, it is one of the earliest and most widely adopted coding benchmarks, and many later code benchmarks extend its format or its pass@k methodology.
https://github.com/openai/human-eval
Overall: C (Below Average)
Adoption: C+ · Quality: B+ · Freshness: A · Citations: C · Engagement: F
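For context, below is a minimal sketch of running a model against HumanEval with the reference harness from the repository above. `generate_one_completion` is a hypothetical placeholder for your own model call, and the sample count is illustrative, not the benchmark's required setting.

```python
# Minimal sketch of producing HumanEval samples for OpenAI's reference
# harness (github.com/openai/human-eval). The model call below is a
# hypothetical placeholder.
from human_eval.data import write_jsonl, read_problems

def generate_one_completion(prompt: str) -> str:
    # Placeholder: call your model here and return the code that continues
    # the prompt (the prompt already contains the signature and docstring).
    raise NotImplementedError

problems = read_problems()  # dict: task_id -> problem (prompt, test, entry_point, ...)

num_samples_per_task = 10  # k > 1 enables pass@10 in addition to pass@1
samples = [
    dict(task_id=task_id,
         completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in problems
    for _ in range(num_samples_per_task)
]
write_jsonl("samples.jsonl", samples)

# Scoring executes model-generated code; the harness ships with execution
# disabled until you opt in (see the repo README), then run:
#   $ evaluate_functional_correctness samples.jsonl
```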
Specifications
- License: MIT
- Pricing: free
- Capabilities: not listed
- Integrations: not listed
- Use Cases: not listed
- API Available: No
- Evaluated Models: claude-4, gpt-5, gemini-2.5-pro, deepseek-v3, llama-4-405b
- Metrics: pass@1, pass@10
- Methodology: 164 hand-written Python programming problems. Models generate function implementations that are evaluated by executing the accompanying unit tests; pass@k measures the probability that at least one of k sampled solutions passes all tests (see the estimator sketch after this list).
- Last Run: 2026-02-15
- Tags: benchmark, coding, python, code-generation, openai, unit-tests
- Added: 2026-04-24
- Completeness: 60%
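pass@k is typically computed with the unbiased estimator from the HumanEval paper (Chen et al., 2021): generate n samples per problem, count the c that pass, and take 1 - C(n-c, k)/C(n, k), averaged over problems. The sketch below is an illustrative standalone reimplementation of that estimator, not the harness's exact code, and the example numbers are made up.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for a single problem.

    n: total samples generated for the problem
    c: number of samples that passed all unit tests
    k: the k in pass@k
    """
    if n - c < k:
        return 1.0  # every size-k subset contains at least one passing sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative example: 10 samples for one task, 3 of which pass
print(pass_at_k(10, 3, 1))   # 0.30
print(pass_at_k(10, 3, 10))  # 1.00 (only 7 failures, so any 10 samples include a pass)
```

The benchmark-level pass@1 and pass@10 reported above are the mean of these per-problem estimates across all 164 problems.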
Index Score: 44
- Adoption: 50
- Quality: 70
- Freshness: 80
- Citations: 40
- Engagement: 0