
API-Bank

by Li et al. / Alibaba DAMO Academy · open-source · Last verified 2026-03-17

API-Bank is a comprehensive benchmark for evaluating tool-augmented LLMs across 73 APIs spanning daily-use categories. It tests three difficulty levels (calling a given API, retrieving and then calling the correct API, and planning multi-step API sequences), measuring both tool selection and execution correctness.

https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/api-bank
C+ · Average
Adoption: B · Quality: A · Freshness: B+ · Citations: B · Engagement: F

Specifications

License
Apache-2.0
Pricing
open-source
Capabilities
evaluation, api-calling, tool-selection, agent-planning
Integrations
Use Cases
model-evaluation, ai-agents, tool-augmented-llm
API Available
No
Evaluated Models
gpt-4o, claude-opus-4, llama-3-70b, toolllama
Metrics
accuracy, api-selection-accuracy, call-accuracy
Methodology
Three-level evaluation: Level 1 generates a correct call to a given API; Level 2 retrieves the correct API from the catalog before calling it; Level 3 plans, retrieves, and executes multi-step API sequences. Each level is assessed with exact-match and functional-correctness metrics.
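The exact-match side of this scoring can be sketched as below. This is a minimal illustration, not API-Bank's actual harness: the record shape ({"api_name": ..., "parameters": {...}}) and the helper name score_call are assumptions made for the example.

```python
def score_call(predicted: dict, gold: dict) -> dict:
    """Score one predicted API call against the gold call.

    Both arguments are dicts of the assumed form
    {"api_name": str, "parameters": dict}.
    """
    # Tool selection: did the model pick the right API?
    name_ok = predicted.get("api_name") == gold.get("api_name")
    # Exact match: right API *and* identical parameter dict.
    exact = name_ok and predicted.get("parameters") == gold.get("parameters")
    return {"api_selected": name_ok, "call_correct": exact}


pred = {"api_name": "GetWeather", "parameters": {"city": "Paris"}}
gold = {"api_name": "GetWeather", "parameters": {"city": "Paris"}}
print(score_call(pred, gold))
# -> {'api_selected': True, 'call_correct': True}
```

Averaging api_selected over a test set gives an api-selection-accuracy figure, and call_correct a call-accuracy figure; functional correctness (whether the executed call returns the right result) requires actually running the call against the API sandbox and is not captured by exact match alone.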
Last Run
2026-01-12
Tags
tool-use, api-call, agents, multi-step, planning
Added
2026-03-17
Completeness
100%

Index Score

58.8
Adoption
62
Quality
85
Freshness
74
Citations
68
Engagement
0
