Paper · ai-evaluation · v1.0

Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference

by LMSYS / UC Berkeley · free · Last verified 2026-03-17

Introduces Chatbot Arena, a platform for crowdsourced human evaluation of LLMs via pairwise comparisons using an Elo rating system. The arena has collected over 240K human votes across 50+ models, revealing human preference rankings that often diverge from standard benchmark leaderboards and providing a complementary evaluation signal.
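The ranking method named above is Elo-style rating over pairwise votes. As a rough illustration only (not the paper's exact procedure; the Arena team has also described fitting a Bradley–Terry model over all votes rather than relying solely on online updates), a minimal online Elo update from pairwise human votes might look like the sketch below. The K-factor of 4, the 1000-point base rating, and the sample model names are illustrative assumptions.

```python
from collections import defaultdict

# Minimal sketch of online Elo updates from pairwise human votes.
# K=4 and the 1000-point base rating are illustrative assumptions,
# not parameters taken from the paper.
K = 4
ratings = defaultdict(lambda: 1000.0)

def expected(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(model_a: str, model_b: str, score_a: float) -> None:
    """Apply one vote. score_a: 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    e_a = expected(ratings[model_a], ratings[model_b])
    ratings[model_a] += K * (score_a - e_a)
    ratings[model_b] += K * ((1.0 - score_a) - (1.0 - e_a))

# Example: replay a handful of hypothetical pairwise votes.
for a, b, s in [("model-x", "model-y", 1.0), ("model-x", "model-z", 0.5)]:
    update(a, b, s)

print(sorted(ratings.items(), key=lambda kv: -kv[1]))
```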

https://arxiv.org/abs/2403.04132
Overall: B+ (Good)
Adoption: A · Quality: A · Freshness: B+ · Citations: A · Engagement: F

Specifications

License: Apache-2.0
Pricing: free
Capabilities: human-preference-evaluation, elo-ranking, pairwise-comparison, crowdsourced-evaluation
Integrations: —
Use Cases: model-evaluation, human-preference-assessment, research
API Available: No
Tags: evaluation, human-preference, elo, arena, chatbot, benchmark
Added: 2026-03-17
Completeness: 100%

Index Score

Overall: 74
Adoption: 88 · Quality: 89 · Freshness: 76 · Citations: 84 · Engagement: 0
