Dataset · alignment · v1.0

Orca DPO Pairs

by Intel Labs / Community · open-source · Last verified 2026-03-17

Orca DPO Pairs is a preference dataset of 12,000 chosen/rejected pairs derived from Orca-style system prompts, in which GPT-4 responses serve as the chosen completions and GPT-3.5 responses as the rejected completions. It is designed for training models with Direct Preference Optimization (DPO), where the two responses provide a clear quality contrast, and it is widely used in the open-source community for lightweight alignment without a separate reward model.

https://huggingface.co/datasets/Intel/orca_dpo_pairs
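
The dataset can be pulled directly from the Hugging Face Hub with the datasets library. The sketch below assumes the column names commonly listed on the dataset card (system, question, chosen, rejected); verify them against the current card before relying on them.

```python
from datasets import load_dataset

# Pull the preference pairs from the Hugging Face Hub.
ds = load_dataset("Intel/orca_dpo_pairs", split="train")

# Each record pairs a system prompt and question with a chosen and a
# rejected completion (column names assumed from the dataset card).
example = ds[0]
print(ds.column_names)
print(example["chosen"][:200])
```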
Overall grade: B (Above Average)
Adoption: B+ · Quality: A · Freshness: B+ · Citations: B · Engagement: F

Specifications

License: MIT
Pricing: open-source
Capabilities: dpo-training, preference-alignment, reward-free-rlhf
Integrations: huggingface-datasets, trl (see the training sketch after this list)
Use Cases: dpo-finetuning, alignment-research, preference-learning
API Available: Yes
Tags: dpo, preference, alignment, synthetic, rlhf
Added: 2026-03-17
Completeness: 100%
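
Because the dataset already ships chosen/rejected pairs, it plugs into trl's DPOTrainer with only a small mapping step. The following is a minimal sketch, assuming a recent trl release (DPOConfig, processing_class) and the column names shown above; the base model name is a placeholder and exact argument names may differ between trl versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Placeholder base model; any causal LM with a chat-style tokenizer works.
model_name = "Qwen/Qwen2-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

ds = load_dataset("Intel/orca_dpo_pairs", split="train")

# DPOTrainer expects 'prompt', 'chosen', 'rejected' columns; fold the Orca
# system prompt into the prompt text (field names assumed from the card).
def to_dpo_format(row):
    return {
        "prompt": f"{row['system']}\n\n{row['question']}",
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    }

ds = ds.map(to_dpo_format, remove_columns=ds.column_names)

args = DPOConfig(output_dir="orca-dpo", beta=0.1, per_device_train_batch_size=2)
trainer = DPOTrainer(model=model, args=args, train_dataset=ds, processing_class=tokenizer)
trainer.train()
```

When no reference model is passed, DPOTrainer builds one internally from the policy model, which is what keeps this reward-model-free setup lightweight to run.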

Index Score

60.2 overall
Adoption: 70 · Quality: 80 · Freshness: 76 · Citations: 65 · Engagement: 0
