Dataset · instruction-tuning · v1.0

Nectar

by UC Berkeley · open-source · Last verified 2026-03-17

A high-quality preference dataset containing 183,000 prompts, each paired with 7 ranked responses collected from ChatGPT, GPT-4, and open-source LLMs. It is designed for training reward models and RLHF pipelines, with multi-source response diversity.

https://huggingface.co/datasets/berkeley-nest/Nectar
Overall: B (Above Average)
Adoption: B · Quality: A · Freshness: B+ · Citations: B+ · Engagement: F

Specifications

License
Apache-2.0
Pricing
open-source
Capabilities
reward-model-training, rlhf, preference-learning
Integrations
huggingface-datasets
Use Cases
rlhf, reward-modeling, alignment-research
API Available
No
Tags
rlhf, preference-data, ranked-responses, reward-model, berkeley
Added
2026-03-17
Completeness
100%
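Since each Nectar prompt carries 7 ranked responses, a common preparation step for reward-model training is expanding the ranking into pairwise (chosen, rejected) examples. The sketch below shows this for one Nectar-style record; the field names (`prompt`, `answers`, `rank`) follow the dataset card's schema and should be treated as assumptions, not a guaranteed API.

```python
from itertools import combinations

def to_preference_pairs(record):
    """Expand one Nectar-style record (a prompt plus ranked answers)
    into (prompt, chosen, rejected) pairs for reward-model training.
    Lower rank means a more preferred answer."""
    ranked = sorted(record["answers"], key=lambda a: a["rank"])
    pairs = []
    # Every ordered pair (better, worse) becomes one training example.
    for better, worse in combinations(ranked, 2):
        pairs.append({
            "prompt": record["prompt"],
            "chosen": better["answer"],
            "rejected": worse["answer"],
        })
    return pairs

# Toy record with 3 ranked answers (a real Nectar record has 7).
record = {
    "prompt": "Explain RLHF in one sentence.",
    "answers": [
        {"answer": "RLHF fine-tunes a model against a learned reward.", "rank": 1},
        {"answer": "It is reinforcement learning on human labels.", "rank": 2},
        {"answer": "No idea.", "rank": 3},
    ],
}
pairs = to_preference_pairs(record)
print(len(pairs))  # C(3, 2) = 3 pairs
```

With the full 7-way ranking, each prompt yields C(7, 2) = 21 preference pairs, which is why K-wise datasets like this one are denser training signal than simple A/B comparisons.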

Index Score

61.6
Adoption
66
Quality
86
Freshness
73
Citations
72
Engagement
0
