
RULER

by Hsieh et al. / NVIDIA · free · Last verified 2026-03-17

RULER is a synthetic benchmark for evaluating large language models on long-context tasks, with context lengths scaling from 4K to 128K tokens. It assesses complex skills such as multi-hop retrieval, aggregation, and coreference resolution, offering a more fine-grained analysis than simple "needle-in-a-haystack" tests.
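
To make the task style concrete, below is a minimal sketch of a single-needle retrieval probe built at a target context length. The helper name, filler text, and whitespace-based length estimate are illustrative assumptions for this page, not RULER's actual generation code; the repository linked below defines the real task templates and controls length with the evaluated model's tokenizer.

```python
import random

FILLER = "The grass is green. The sky is blue. The sun is yellow. "

def build_niah_example(target_tokens: int = 4096, seed: int = 0) -> dict:
    """Build one synthetic needle-in-a-haystack probe.

    Illustrative only: length is approximated by whitespace-separated words,
    whereas RULER measures length with the evaluated model's tokenizer.
    """
    rng = random.Random(seed)
    key = f"{rng.randint(0, 9999):04d}"
    needle = f"The special magic number for the benchmark is {key}."

    # Repeat filler text until we reach the target length, then drop the
    # needle at a random position inside the haystack.
    words = []
    while len(words) < target_tokens:
        words.extend(FILLER.split())
    insert_at = rng.randint(0, len(words))
    haystack = words[:insert_at] + needle.split() + words[insert_at:]

    prompt = (
        " ".join(haystack)
        + "\n\nWhat is the special magic number mentioned in the text above?"
    )
    return {"prompt": prompt, "answer": key}

example = build_niah_example(target_tokens=4096)
```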

https://github.com/hsiehjackson/RULER
Overall Grade: B (Above Average)
Adoption: B+ · Quality: A+ · Freshness: A · Citations: B+ · Engagement: F

Specifications

License
Apache-2.0
Pricing
free
Capabilities
long-context evaluation (4K-128K tokens), multi-hop information retrieval testing, aggregative question answering assessment, coreference resolution evaluation, synthetic benchmark data generation, fine-grained analysis of LLM reasoning, comparative benchmarking of LLMs
Integrations
Use Cases
API Available
No
Evaluated Models
gpt-4o, claude-opus-4, gemini-2.5-pro, llama-3-70b
Metrics
accuracy
Methodology
Synthetic tasks are generated at configurable context lengths (4K–128K tokens) across four task categories: needle-in-a-haystack (NIAH; single-key, multi-key, and multi-value variants), variable tracking, aggregation, and question answering. Accuracy is averaged across categories at each context length.
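
A minimal sketch of that scoring scheme, assuming per-example results that carry a gold answer string and a model prediction: accuracy is computed per (context length, category) pair, then averaged across categories at each length. The field and function names, and the simple containment match, are assumptions for illustration, not RULER's evaluation code.

```python
from collections import defaultdict
from statistics import mean

def score_results(results):
    """results: iterable of dicts with keys
    'category', 'context_length', 'prediction', 'answer'.

    Returns {context_length: accuracy averaged across task categories}.
    """
    # Per-example correctness, grouped by (context length, category).
    per_cat = defaultdict(list)
    for r in results:
        correct = r["answer"].strip() in r["prediction"]
        per_cat[(r["context_length"], r["category"])].append(correct)

    # Average within each category, then average across categories
    # at each context length.
    by_length = defaultdict(list)
    for (length, _category), hits in per_cat.items():
        by_length[length].append(mean(hits))
    return {length: mean(accs) for length, accs in sorted(by_length.items())}
```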
Last Run
2026-02-28
Tags
long-context-evaluation, llm-benchmark, retrieval-testing, synthetic-data, multi-hop-retrieval, question-answering, coreference-resolution, needle-in-haystack, scalable-benchmark, reasoning-benchmark
Added
2026-03-17
Completeness
0.9%

Index Score

65.2
Adoption
71
Quality
90
Freshness
82
Citations
75
Engagement
0
