Benchmark · benchmarks-evaluation · v1.0

SWE-bench

by Princeton NLP · free · Last verified 2026-04-24

SWE-bench is a benchmark for evaluating AI systems' ability to resolve real GitHub issues drawn from popular Python repositories. Each instance requires understanding a codebase, localizing the bug, and producing a patch that passes the repository's tests. SWE-bench Verified is a human-validated subset that has become the de facto standard for coding agent evaluation.
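
As a rough illustration of what one instance contains, the sketch below loads the benchmark through the Hugging Face datasets library. The dataset path princeton-nlp/SWE-bench_Verified and the field names shown are assumptions based on the public release and may differ from the copy you evaluate against.

```python
# Hedged sketch: inspect a single SWE-bench Verified instance.
# The dataset path and field names below are assumptions; check them
# against the release you actually download.
from datasets import load_dataset

ds = load_dataset("princeton-nlp/SWE-bench_Verified", split="test")
example = ds[0]

print(example["instance_id"])        # e.g. "astropy__astropy-12907"
print(example["repo"])               # source repository, e.g. "astropy/astropy"
print(example["base_commit"])        # commit the generated patch must apply to
print(example["problem_statement"])  # the GitHub issue text shown to the model
```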

https://swe-bench.github.io
Overall grade: C (Below Average)
Adoption: C+ · Quality: B+ · Freshness: A · Citations: C · Engagement: F

Specifications

License
Proprietary
Pricing
free
Capabilities
Integrations
Use Cases
API Available
No
Evaluated Models
claude-4, gpt-5, gemini-2.5-pro, deepseek-v3
Metrics
resolve-rate, pass@1, patch-accuracy
Methodology
Real GitHub issues paired with validated test patches. Models must produce code patches that pass each repository's test suite in an isolated Docker environment (see the evaluation sketch after the specifications).
Last Run
2026-02-28
Tags
benchmark, coding, software-engineering, github-issues, agents, python
Added
2026-04-24
Completeness
60%
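
As noted in the methodology above, scoring runs each candidate patch inside Docker. The sketch below shows a plausible predictions file for the official harness and the headline resolve-rate metric; the field names and the harness invocation mentioned in the comments are assumptions drawn from the public swebench package and should be verified against the version you install.

```python
# Hedged sketch: write a predictions file for the SWE-bench harness and
# compute the resolve rate. Field names (instance_id, model_name_or_path,
# model_patch) and the harness module path mentioned below are assumptions.
import json

predictions = [
    {
        "instance_id": "astropy__astropy-12907",      # instance being patched
        "model_name_or_path": "my-coding-agent",      # label for this run (hypothetical)
        "model_patch": "diff --git a/x.py b/x.py\n",  # unified diff produced by the model
    },
]

with open("predictions.jsonl", "w") as f:
    for p in predictions:
        f.write(json.dumps(p) + "\n")

# The harness (roughly: python -m swebench.harness.run_evaluation ...) applies
# each patch at the instance's base commit inside a Docker image and reruns the
# repository's tests. Resolve rate is the fraction of instances whose required
# tests all pass after the patch is applied.
def resolve_rate(resolved_ids: set[str], total_instances: int) -> float:
    return len(resolved_ids) / total_instances if total_instances else 0.0
```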

Index Score

Overall: 44
Adoption: 50 · Quality: 70 · Freshness: 80 · Citations: 40 · Engagement: 0
