Dataset · Computer Vision · v1.0

DataComp-1B

by DataComp Consortium · open-source · Last verified 2026-03-17

A curated dataset of 1.28 billion image-text pairs produced through the DataComp benchmark competition, in which participants filtered a 12.8-billion-pair candidate pool to train the best possible downstream CLIP model. DataComp-1B is the output of the winning filtering strategy and achieves state-of-the-art zero-shot classification performance among datasets of its size.

https://huggingface.co/datasets/mlfoundations/datacomp_1b
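The core curation idea behind the benchmark is score-based filtering: keep only the candidate pairs whose image and caption embeddings agree most strongly. A minimal sketch of CLIP-score filtering on toy data, assuming precomputed L2-normalized embeddings; the function name, `keep_fraction` parameter, and toy vectors are illustrative, not part of the DataComp tooling:

```python
import numpy as np

def clip_score_filter(image_embs, text_embs, keep_fraction=0.3):
    """Keep the image-text pairs with the highest cosine similarity.

    image_embs, text_embs: (N, D) arrays of L2-normalized embeddings.
    Returns indices of kept pairs, sorted by score (descending).
    """
    # For unit vectors, the dot product is the cosine similarity.
    scores = np.sum(image_embs * text_embs, axis=1)
    k = max(1, int(len(scores) * keep_fraction))
    return np.argsort(scores)[::-1][:k]

# Toy demo: 4 pairs with 2-D unit embeddings.
img = np.array([[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071], [1.0, 0.0]])
txt = np.array([[1.0, 0.0], [1.0, 0.0], [0.7071, 0.7071], [0.0, 1.0]])
kept = clip_score_filter(img, txt, keep_fraction=0.5)
print(kept)  # indices of the two best-aligned pairs: [0 2]
```

The winning DataComp-1B strategy combined this kind of similarity thresholding with additional image-based filtering; the fraction kept (roughly 1.28B out of 12.8B, i.e. about 10%) is what the `keep_fraction` knob stands in for here.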
Overall grade: B (Above Average)
Adoption: B+ · Quality: A+ · Freshness: B+ · Citations: A · Engagement: F

Specifications

License
CC-BY-4.0
Pricing
open-source
Capabilities
vision-language-pretraining, image-text-alignment, zero-shot-classification
Integrations
hugging-face
Use Cases
clip-training, vision-language-pretraining, research
API Available
Yes
Tags
multimodal, image-text, benchmark, data-curation, clip
Added
2026-03-17
Completeness
100%

Index Score

66.6
Adoption
71
Quality
91
Freshness
72
Citations
80
Engagement
0
