AgentBench: Evaluating LLMs as Agents
by Tsinghua University · free · Last verified 2026-03-17
Introduces AgentBench, the first systematic benchmark for evaluating LLMs as autonomous agents across eight distinct environments spanning operating systems, databases, knowledge graphs, digital games, and web browsing. The benchmark reveals a large performance gap between commercial and open-source models on real-world agent tasks.
https://arxiv.org/abs/2308.03688
Grade: B (Above Average)
Adoption: B+ · Quality: A · Freshness: B · Citations: A · Engagement: F
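As a rough illustration of what "agent evaluation" means here, below is a minimal Python sketch of the kind of multi-turn agent-environment loop AgentBench scores: the model acts, the environment responds, and a task counts as solved only if the goal is reached within a turn budget. The `Environment` class, `call_llm` stub, and demo task are hypothetical stand-ins for illustration, not the benchmark's actual API (the real harness lives at github.com/THUDM/AgentBench).

```python
# Minimal sketch of a multi-turn agent-environment evaluation loop in the
# spirit of AgentBench. Environment, call_llm, and the demo task are
# hypothetical stand-ins for illustration, NOT the benchmark's actual API.
from dataclasses import dataclass, field

@dataclass
class Environment:
    """One task instance: the agent acts until done or the turn budget runs out."""
    prompt: str
    goal: str
    max_turns: int = 10
    history: list = field(default_factory=list)

    def step(self, action: str) -> tuple[str, bool]:
        # A real environment (OS shell, SQL database, web page, ...) would
        # execute the action and return an observation; this stub only checks
        # whether the action contains the goal string.
        done = self.goal in action
        return ("ok" if done else "try again", done)

def call_llm(prompt: str, history: list) -> str:
    # Stand-in for a model call (e.g. an HTTP request to a hosted LLM).
    return "echo " + prompt

def evaluate(tasks: list) -> float:
    """Success rate: fraction of tasks completed within the turn budget."""
    solved = 0
    for env in tasks:
        for _ in range(env.max_turns):
            action = call_llm(env.prompt, env.history)
            observation, done = env.step(action)
            env.history.append((action, observation))
            if done:
                solved += 1
                break
    return solved / len(tasks) if tasks else 0.0

if __name__ == "__main__":
    demo = [Environment(prompt="print hello", goal="hello")]
    print(f"success rate: {evaluate(demo):.0%}")  # -> 100%
```

AgentBench's real environments are far richer (interactive shells, SQL engines, web browsers), but the per-task turn budget and success-rate scoring shown here reflect the benchmark's general shape.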
Specifications
- License
- Apache-2.0
- Pricing
- free
- Capabilities
- agent-evaluation, multi-environment, benchmarking, tool-use-assessment
- Integrations
- None listed
- Use Cases
- agent-evaluation, research, model-comparison
- API Available
- No
- Tags
- benchmark, agents, evaluation, tool-use, multi-environment
- Added
- 2026-03-17
- Completeness
- 100%
Index Score: 68.4
- Adoption: 78
- Quality: 86
- Freshness: 65
- Citations: 80
- Engagement: 0
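The overall score appears to be a composite of the five sub-scores, though the directory does not publish its formula. A minimal sketch of such a weighted average follows; the weights are illustrative guesses, and notably they yield 70.0 rather than the listed 68.4, underscoring that the real weighting is unknown.

```python
# Hypothetical reconstruction of how a composite Index Score could be
# computed from the five sub-scores above. The weights are illustrative
# guesses; the directory does not publish its formula, and these weights
# give 70.0 rather than the listed 68.4.
SUBSCORES = {
    "adoption": 78,
    "quality": 86,
    "freshness": 65,
    "citations": 80,
    "engagement": 0,
}
WEIGHTS = {  # guessed weights; must sum to 1
    "adoption": 0.25,
    "quality": 0.25,
    "freshness": 0.20,
    "citations": 0.20,
    "engagement": 0.10,
}

def index_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of the sub-scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[k] * weights[k] for k in scores)

print(f"{index_score(SUBSCORES, WEIGHTS):.1f}")  # -> 70.0 with these guesses
```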