Benchmark · AI Agents · v1.0

API-Bank

by Li et al. / Alibaba DAMO Academy · free · Last verified 2026-03-17

API-Bank is a benchmark for evaluating tool-augmented LLMs. It covers 73 diverse APIs and assesses models at three levels: API retrieval, API calling, and multi-step planning. Scoring measures both whether the correct tool is selected and whether the resulting call executes correctly.
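The two headline metrics reported for this benchmark, api-selection-accuracy and call-accuracy, can be sketched in a few lines. This is an illustrative harness under an assumed record format (`api`/`args` dicts), not API-Bank's actual scoring code.

```python
# Hypothetical prediction/gold record format: {"api": name, "args": {...}}.
# Mirrors the listed metric names, not API-Bank's real implementation.

def api_selection_accuracy(preds, golds):
    """Level 1: fraction of turns where the model picked the gold API."""
    return sum(p["api"] == g["api"] for p, g in zip(preds, golds)) / len(golds)

def call_accuracy(preds, golds):
    """Level 2: fraction of calls matching API name and arguments exactly."""
    hits = sum(p["api"] == g["api"] and p["args"] == g["args"]
               for p, g in zip(preds, golds))
    return hits / len(golds)

golds = [{"api": "a", "args": {"x": 1}}, {"api": "b", "args": {"x": 2}}]
preds = [{"api": "a", "args": {"x": 1}}, {"api": "b", "args": {"x": 3}}]
```

Note the asymmetry: a call that names the right API with wrong arguments counts toward selection accuracy but not call accuracy.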

https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/api-bank
C+ (Average)
Adoption: B · Quality: A · Freshness: B+ · Citations: B · Engagement: F

Specifications

License
Apache-2.0
Pricing
free
Capabilities
Evaluating tool-augmented LLMs, Benchmarking API retrieval, Benchmarking API calling, Assessing multi-step agent planning, Testing tool selection accuracy, Measuring execution correctness, Supporting diverse API categories, Providing three distinct difficulty levels
Integrations
Use Cases
API Available
No
Evaluated Models
gpt-4o, claude-opus-4, llama-3-70b, toolllama
Metrics
accuracy, api-selection-accuracy, call-accuracy
Methodology
Three-level evaluation: Level 1 retrieves correct API from catalog; Level 2 generates correct API call; Level 3 plans and executes multi-step API sequences. Each level assessed with exact-match and functional correctness metrics.
Last Run
2026-01-12
Tags
tool-use, api-call, agents, multi-step, planning, benchmark, evaluation, llm-evaluation, agent-benchmark, tool-augmented-llm
Added
2026-03-17
Completeness
80%
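The methodology above assesses each level with both exact-match and functional correctness. The distinction can be sketched as follows; the `(api_name, kwargs)` call representation and the stub registry are assumptions for illustration, not API-Bank's actual interfaces.

```python
# A call is represented as (api_name, kwargs); the registry maps API names
# to stub implementations. Both choices are illustrative assumptions.

def exact_match(pred, gold):
    """Strict check: API name and arguments must match verbatim."""
    return pred == gold

def functionally_correct(pred, gold, registry):
    """Looser check: both calls must produce the same result when executed."""
    name_p, args_p = pred
    name_g, args_g = gold
    if name_p not in registry:
        return False
    try:
        return registry[name_p](**args_p) == registry[name_g](**args_g)
    except Exception:
        return False

# Stub API that normalizes its title argument, so differently-cased
# arguments are functionally equivalent but fail exact match.
registry = {"add_event": lambda title, date: (title.lower(), date)}
gold = ("add_event", {"title": "Standup", "date": "2026-01-12"})
pred = ("add_event", {"title": "standup", "date": "2026-01-12"})
```

Here `exact_match(pred, gold)` fails on the casing difference while `functionally_correct` passes, which is why the two metrics can diverge.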

Index Score

58.8
Adoption
62
Quality
85
Freshness
74
Citations
68
Engagement
0
