Benchmark · benchmarks-evaluation · v1.0

AIME 2025

by MAA / Community Eval · free · Last verified 2026-04-24

AIME (American Invitational Mathematics Examination) 2025 is used as a frontier math-reasoning benchmark for LLMs. Its competition-level problems require multi-step reasoning rather than fact lookup, making AIME scores a direct indicator of a model's mathematical problem-solving depth. Frontier models are evaluated on the 2025 problem set because it postdates most training cutoffs, reducing the risk of training-data contamination.
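
As a minimal sketch of the exact-match scoring convention used for AIME-style evals: every AIME answer is an integer from 0 to 999, so grading reduces to extracting the model's final integer and comparing it against the answer key. The `problems` record format and the answer-extraction regex below are illustrative assumptions, not the interface of any particular harness.

```python
import re

def extract_answer(completion: str) -> int | None:
    """Take the last integer in a completion as the model's final answer.

    AIME answers are always integers in [0, 999], so anything outside
    that range is treated as a failure to answer.
    """
    matches = re.findall(r"-?\d+", completion)
    if not matches:
        return None
    value = int(matches[-1])
    return value if 0 <= value <= 999 else None

def aime_accuracy(problems: list[dict], completions: list[str]) -> float:
    """Fraction of problems where the extracted answer matches the key exactly.

    `problems` is assumed to hold dicts like {"id": "2025-I-1", "answer": 70}
    (illustrative values); real harnesses typically also parse \\boxed{...}
    or otherwise strip chain-of-thought before extraction.
    """
    correct = sum(
        extract_answer(c) == p["answer"] for p, c in zip(problems, completions)
    )
    return correct / len(problems)
```

AIME 2025 spans two 15-problem exams (AIME I and AIME II), so reported scores are typically accuracy over the combined 30 problems, often averaged over multiple samples per problem.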

https://artofproblemsolving.com/wiki/index.php/AIME_Problems_and_Solutions
Overall Grade: C (Below Average)
Adoption: C+ · Quality: B+ · Freshness: A · Citations: C · Engagement: F

Specifications

License: Proprietary
Pricing: Free
Capabilities: (not listed)
Integrations: (not listed)
Use Cases: (not listed)
API Available: No
Tags: benchmark, math, reasoning, competition, aime, frontier
Added: 2026-04-24
Completeness: 60%

Index Score: 44

Adoption: 50
Quality: 70
Freshness: 80
Citations: 40
Engagement: 0
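
The page does not state how the five component scores combine into the headline index; note that a plain unweighted mean of the components above comes to 48, not the displayed 44, so the site presumably applies undisclosed weights. The sketch below shows a generic weighted mean with placeholder equal weights, purely for illustration.

```python
def index_score(components: dict[str, float],
                weights: dict[str, float] | None = None) -> float:
    """Weighted mean of component scores; equal weights when none are given."""
    if weights is None:
        weights = {name: 1.0 for name in components}
    total = sum(weights.values())
    return sum(components[name] * weights[name] for name in components) / total

components = {"adoption": 50, "quality": 70, "freshness": 80,
              "citations": 40, "engagement": 0}

print(index_score(components))  # 48.0 under equal weights; the listed index is
                                # 44, so the actual (undisclosed) weights differ
```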
