Auto-GPT for Online Decision Making: Benchmarks and Additional Opinions
by University of Edinburgh / Allen AI · free · Last verified 2026-03-17
This paper introduces a benchmark suite for evaluating autonomous agents like Auto-GPT on online decision-making tasks. It assesses their multi-step planning and tool-use abilities, analyzes common failure modes, and highlights the challenges these agents face in reliably completing long-horizon goals.
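To make the evaluation setup concrete, here is a minimal sketch of a harness in the spirit the summary describes: an agent is run on each task under a step budget, and success plus steps taken are recorded. All names (`Task`, `Result`, `evaluate`, the `check` success predicate, the toy agent) are illustrative assumptions, not the paper's actual benchmark API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    """One online decision-making task: a goal, a step budget, a success check."""
    goal: str
    max_steps: int
    check: Callable[[List[str]], bool]  # hypothetical judge over the action trace

@dataclass
class Result:
    goal: str
    success: bool
    steps_taken: int

def evaluate(agent: Callable[[str, List[str]], str], tasks: List[Task]) -> List[Result]:
    """Run the agent on each task, capping long-horizon runs at max_steps."""
    results = []
    for task in tasks:
        trace: List[str] = []
        success = False
        for _ in range(task.max_steps):
            action = agent(task.goal, trace)  # agent sees the goal and its own history
            trace.append(action)
            if task.check(trace):             # stop as soon as the goal is judged met
                success = True
                break
        results.append(Result(task.goal, success, len(trace)))
    return results

# Toy agent and task, for illustration only.
toy_agent = lambda goal, trace: "search" if not trace else "answer"
tasks = [Task("find the capital of France", max_steps=5, check=lambda t: "answer" in t)]
results = evaluate(toy_agent, tasks)
print(results[0].success, results[0].steps_taken)  # → True 2
```

A real harness would replace the toy agent with an LLM-backed planner and the `check` predicate with task-specific verification, but the success/steps bookkeeping is the part failure-mode analysis depends on.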
https://arxiv.org/abs/2306.02224
Overall grade: B (Above Average)
Adoption: B+ · Quality: B+ · Freshness: B · Citations: B+ · Engagement: F
Specifications
- License
- Open Access
- Pricing
- free
- Capabilities
- autonomous-agent-evaluation, benchmark-creation, long-horizon-task-completion, multi-step-planning-analysis, tool-use-assessment, failure-mode-analysis, llm-agent-performance-metrics, online-decision-making
- Integrations
- Use Cases
- API Available
- No
- Tags
- autonomous-agents, benchmarking, llm-evaluation, auto-gpt, gpt-4, decision-making, long-horizon-planning, tool-use, agentic-ai, failure-analysis, research-paper
- Added
- 2026-03-17
- Completeness
- 1%
Index Score: 62.4
- Adoption: 72
- Quality: 78
- Freshness: 64
- Citations: 72
- Engagement: 0