Paper · reinforcement-learning · v1.0

Deep Reinforcement Learning from Human Preferences

by OpenAI & DeepMind · free · Last verified 2026-03-17

This foundational RLHF paper shows that pairwise human preferences between short trajectory segments can train a reward model that in turn guides deep RL agents on complex tasks such as Atari games and MuJoCo locomotion, with no hand-crafted reward function. The approach needs human feedback on less than 1% of the agent's environment interactions, reducing labeling effort by roughly three orders of magnitude compared to labeling every agent interaction directly.

https://arxiv.org/abs/1706.03741
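
As a concrete illustration of the paper's core mechanism, the sketch below fits a reward model to pairwise preferences using the Bradley-Terry cross-entropy loss the paper describes. It is a minimal PyTorch sketch: the network sizes, segment length, and the synthetic stand-in "human" labels are illustrative assumptions, not the paper's exact architecture or data.

# Minimal sketch: learning a reward model from pairwise preferences
# (Bradley-Terry model over trajectory segments). Dimensions and the
# synthetic preference labels below are illustrative assumptions.

import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps an (observation, action) pair to a scalar reward estimate."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def preference_loss(model, seg1, seg2, prefs):
    """Bradley-Terry cross-entropy over two trajectory segments.

    seg1, seg2: (obs, act) tuples shaped (batch, T, obs_dim) / (batch, T, act_dim).
    prefs: (batch,) in {0, 1}; 1 means the human preferred seg1.
    P(seg1 > seg2) = sigmoid(sum r(seg1) - sum r(seg2)).
    """
    r1 = model(*seg1).sum(dim=-1)  # summed predicted reward over segment 1
    r2 = model(*seg2).sum(dim=-1)  # summed predicted reward over segment 2
    logits = r1 - r2               # log-odds that seg1 is preferred
    return nn.functional.binary_cross_entropy_with_logits(logits, prefs.float())

if __name__ == "__main__":
    # Toy training loop on synthetic preferences (hypothetical dimensions).
    obs_dim, act_dim, T, batch = 4, 2, 25, 32
    model = RewardModel(obs_dim, act_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):
        s1 = (torch.randn(batch, T, obs_dim), torch.randn(batch, T, act_dim))
        s2 = (torch.randn(batch, T, obs_dim), torch.randn(batch, T, act_dim))
        # Stand-in "human" label: prefer the segment with the larger first
        # observation feature, summed over the segment.
        prefs = (s1[0][..., 0].sum(-1) > s2[0][..., 0].sum(-1)).long()
        loss = preference_loss(model, s1, s2, prefs)
        opt.zero_grad()
        loss.backward()
        opt.step()

In the full method, the learned reward model's output replaces the environment reward for a standard deep RL algorithm, and fresh segment pairs are periodically sent to humans for comparison.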
Overall grade: B+ (Good)
Adoption: A · Quality: A+ · Freshness: C+ · Citations: A+ · Engagement: F

Specifications

License: Open Access
Pricing: free
Capabilities: reward-learning, preference-learning, rlhf, human-in-the-loop
Integrations: none listed
Use Cases: llm-alignment, agent-training, reward-model-training
API Available: No
Tags: reinforcement-learning, rlhf, human-feedback, reward-learning, alignment
Added: 2026-03-17
Completeness: 100%

Index Score: 78

Adoption: 88
Quality: 95
Freshness: 58
Citations: 95
Engagement: 0
