Benchmark · benchmarks-evaluation · v1.0

HumanEval

by OpenAI · free · Last verified 2026-04-24

HumanEval is OpenAI's code-generation benchmark of 164 hand-written Python programming problems, each paired with unit tests. It measures a model's ability to generate functionally correct code from a docstring description, and it is one of the foundational coding benchmarks that many later code benchmarks build on.

https://github.com/openai/human-eval
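
For reference, the linked repository ships a small evaluation harness. The sketch below shows the usual workflow under some assumptions: it uses the human_eval package from that repo, and generate_one_completion is a hypothetical stand-in for whatever model call produces a completion; exact function names and options may differ across versions.

from human_eval.data import read_problems, write_jsonl

# Hypothetical stand-in for your model: takes a prompt (function signature
# plus docstring) and returns the function body to be tested.
def generate_one_completion(prompt: str) -> str:
    raise NotImplementedError("call your model here")

problems = read_problems()   # maps task_id -> problem dict with a "prompt" field
num_samples_per_task = 10    # e.g. 10 samples per problem to estimate pass@10

samples = [
    dict(task_id=task_id, completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in problems
    for _ in range(num_samples_per_task)
]
write_jsonl("samples.jsonl", samples)

# Completions are then scored by executing the bundled unit tests, e.g.:
#   evaluate_functional_correctness samples.jsonl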
Overall grade: C (Below Average)
Adoption: C+ · Quality: B+ · Freshness: A · Citations: C · Engagement: F

Specifications

License: Proprietary
Pricing: Free
Capabilities: (not listed)
Integrations: (not listed)
Use Cases: (not listed)
API Available: No
Evaluated Models: claude-4, gpt-5, gemini-2.5-pro, deepseek-v3, llama-4-405b
Metrics: pass@1, pass@10
Methodology: 164 hand-written Python programming problems. Models generate function implementations that are evaluated by executing the accompanying test cases. Pass@k measures the probability that at least one of k sampled solutions is correct (a sketch of the pass@k estimator follows this list).
Last Run: 2026-02-15
Tags: benchmark, coding, python, code-generation, openai, unit-tests
Added: 2026-04-24
Completeness: 60%
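
To make pass@k concrete, here is a sketch of the unbiased estimator described in the original HumanEval paper: for a problem where n samples were drawn and c of them pass the tests, pass@k = 1 - C(n-c, k) / C(n, k), averaged over all problems. The sample counts in the example are illustrative, not results from this listing.

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased estimate of pass@k for one problem: probability that at least
    # one of k draws (without replacement) from n samples, c of which are
    # correct, passes the unit tests.
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 10 samples per problem, with 3, 0, and 7 correct respectively.
per_problem = [pass_at_k(10, c, k=1) for c in (3, 0, 7)]
benchmark_pass_at_1 = sum(per_problem) / len(per_problem)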

Index Score: 44
Adoption: 50 · Quality: 70 · Freshness: 80 · Citations: 40 · Engagement: 0
