WebArena

by CMU · open-source · Last verified 2026-03-01

Realistic web-environment benchmark that tests agents' ability to complete complex tasks through real web interfaces on self-hosted websites, including e-commerce, forum, collaborative-development, and content-management platforms.

https://webarena.dev
Overall Grade: B (Above Average)
Adoption: B · Quality: A+ · Freshness: A · Citations: B+ · Engagement: F

Specifications

License
Apache-2.0
Pricing
open-source
Capabilities
agent-evaluation, web-interaction-testing, browser-automation-assessment
Integrations
playwright, docker
Use Cases
web-agent-benchmarking, browser-automation-evaluation, interactive-agent-testing
API Available
No
Evaluated Models
claude-4, gpt-5, gemini-2.5-pro, deepseek-v3
Metrics
success-rate, step-accuracy
Methodology
812 web-based tasks across 5 self-hosted websites. Agents interact via browser actions and are evaluated on task completion, which is determined by URL, page-content, or database-state checks (see the sketch after these specifications).
Last Run
2026-02-28
Tags
benchmark, evaluation, agents, web, browser-automation
Added
2026-03-17
Completeness
100%
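
The completion checks named in the methodology can be illustrated with a short sketch. This is not WebArena's actual evaluation harness: it assumes Playwright's Python sync API (one of the listed integrations), and the task fields (start_url, expected_url, expected_text) are hypothetical stand-ins for the benchmark's URL and page-content checks. A database-state check would instead query the self-hosted site's backing store.

```python
# Minimal sketch of a WebArena-style task-completion check.
# Assumes: pip install playwright && playwright install chromium
# The task fields below are hypothetical, not WebArena's real schema.
from playwright.sync_api import sync_playwright

def check_task(start_url: str,
               expected_url: str | None = None,
               expected_text: str | None = None) -> bool:
    """Return True if the final page state satisfies the task's criteria."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(start_url)
        # An agent's browser actions (clicks, form fills) would run here.
        ok = True
        if expected_url is not None:       # URL check
            ok = ok and page.url == expected_url
        if expected_text is not None:      # page-content check
            ok = ok and expected_text in page.content()
        browser.close()
        return ok

# The success-rate metric over a task set is then the fraction of
# passing checks: success_rate = sum(results) / len(results)
```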

Index Score

Overall
62.4
Adoption
66
Quality
90
Freshness
86
Citations
72
Engagement
0
