Paper · LLMs · v1.0

Let's Verify Step by Step

by OpenAI · free · Last verified 2026-03-17

Demonstrated that process-supervised reward models (PRMs), which score each step of a reasoning chain, substantially outperform outcome-supervised reward models (ORMs) at identifying correct solutions to mathematical reasoning problems. The paper also released PRM800K, a dataset of 800K step-level human feedback labels on model-generated solutions to MATH problems.
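A minimal sketch of how a process reward model's step-level scores can be used, assuming a hypothetical `step_probs` list of per-step correctness probabilities (the paper aggregates these into a solution score, e.g. the product of step probabilities, and uses it for best-of-N solution selection):

```python
def prm_solution_score(step_probs):
    """Aggregate per-step correctness probabilities into a single
    solution score: the probability that every step is correct,
    assuming independence (product of step probabilities)."""
    score = 1.0
    for p in step_probs:
        score *= p
    return score


def best_of_n(solutions):
    """Pick the candidate with the highest PRM score.

    `solutions` is a list of (solution_text, step_probs) pairs;
    both names are illustrative, not from the paper's code."""
    return max(solutions, key=lambda s: prm_solution_score(s[1]))
```

For example, a two-step solution scored [0.9, 0.8] gets 0.72, and `best_of_n` would prefer it over a one-step solution scored [0.5]. The aggregation rule is an assumption for illustration; the paper discusses alternatives such as taking the minimum step score.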

https://arxiv.org/abs/2305.20050
Overall Grade: B+ (Good)

Adoption: A · Quality: A+ · Freshness: A · Citations: A · Engagement: F

Specifications

License
Open Access
Pricing
free
Capabilities
step-level-feedback, math-reasoning, reward-modeling, process-supervision
Integrations
Use Cases
mathematical-reasoning, process-supervision, rlhf-training
API Available
No
Tags
process-reward-models, reasoning, rlhf, math, step-by-step
Added
2026-03-17
Completeness
100%

Index Score

71.6
Adoption
82
Quality
94
Freshness
81
Citations
80
Engagement
0
