Conservative Q-Learning for Offline Reinforcement Learning
by UC Berkeley · free · Last verified 2026-03-17
CQL (Conservative Q-Learning) addresses distribution shift in offline RL by augmenting the standard Bellman objective with a term that penalizes Q-values for out-of-distribution actions, producing a lower bound on the true value function. This conservative approach prevents over-optimistic value estimation and achieves strong performance across locomotion, navigation, and robotic manipulation datasets.
https://arxiv.org/abs/2006.04779
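The conservative penalty is easiest to see in code. Below is a minimal sketch of the CQL(H) variant for discrete action spaces, assuming a PyTorch Q-network that maps states to per-action Q-values; the names `q_net`, `cql_loss`, and `alpha` are illustrative, not from the authors' reference implementation, and the paper's continuous-action variant requires additional sampling machinery not shown here.

```python
import torch
import torch.nn.functional as F

def cql_loss(q_net, states, actions, td_target, alpha=1.0):
    """Standard TD loss plus the CQL(H) conservative penalty:
    alpha * E_s[ logsumexp_a Q(s, a) - Q(s, a_data) ]."""
    q_values = q_net(states)                                    # (batch, num_actions)
    q_data = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)

    # Push down Q-values over all actions (soft-maximum via logsumexp)
    # while pushing up Q-values on actions observed in the dataset,
    # which yields the lower-bound property described above.
    conservative_penalty = (torch.logsumexp(q_values, dim=1) - q_data).mean()

    # Ordinary Bellman backup error on in-distribution transitions.
    td_loss = F.mse_loss(q_data, td_target)
    return td_loss + alpha * conservative_penalty
```

In practice `alpha` trades off conservatism against fit to the Bellman target; the paper also describes tuning it automatically via a Lagrangian formulation.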
Overall Grade: B (Above Average)
Adoption: B+ · Quality: A+ · Freshness: B · Citations: A · Engagement: F
Specifications
- License: Open Access
- Pricing: free
- Capabilities: offline-rl, conservative-value-estimation, distribution-shift-handling, batch-rl
- Integrations:
- Use Cases: offline-rl-training, robotic-control, healthcare-rl
- API Available: No
- Tags: reinforcement-learning, offline-rl, q-learning, conservative-estimation, distribution-shift
- Added: 2026-03-17
- Completeness: 100%
Index Score: 69.8
- Adoption: 75
- Quality: 90
- Freshness: 62
- Citations: 87
- Engagement: 0