BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
by Salesforce Research · open-source · Last verified 2026-03-17
Presents BLIP-2, which bridges the modality gap between a frozen image encoder and a frozen large language model (LLM) using a lightweight Querying Transformer (Q-Former) trained in two stages. BLIP-2 achieves state-of-the-art VQA performance with significantly fewer trainable parameters than prior methods.
https://arxiv.org/abs/2301.12597
Overall grade: B+ (Good)
Adoption: A · Quality: A+ · Freshness: B+ · Citations: A · Engagement: F
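Since the paper's models ship with a Hugging Face integration (see Specifications below), here is a minimal zero-shot VQA sketch. It assumes the `transformers` library with BLIP-2 support (v4.27+), `torch`, `Pillow`, a CUDA GPU, and a hypothetical local image path `example.jpg`; `Salesforce/blip2-opt-2.7b` is one of the released checkpoints.

```python
# Minimal BLIP-2 zero-shot VQA sketch (assumes transformers >= 4.27, torch,
# Pillow, and a CUDA GPU; example.jpg is a hypothetical placeholder path).
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# One of the released BLIP-2 checkpoints: frozen ViT + frozen OPT-2.7B,
# with only the Q-Former (and a projection) trained to bridge them.
checkpoint = "Salesforce/blip2-opt-2.7b"
processor = Blip2Processor.from_pretrained(checkpoint)
model = Blip2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.float16
).to("cuda")

image = Image.open("example.jpg")
prompt = "Question: what is shown in this image? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)

# Generate the answer conditioned on image features injected by the Q-Former.
generated = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(generated[0], skip_special_tokens=True).strip())
```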
Specifications
- License: BSD-3-Clause
- Pricing: open-source
- Capabilities: visual-question-answering, image-captioning, image-text-retrieval, visual-reasoning
- Integrations: huggingface
- Use Cases: multimodal-qa, image-captioning (see the captioning sketch after this list), zero-shot-vqa
- API Available: No
- Tags: blip-2, multimodal, q-former, bootstrapping, vision-language
- Added: 2026-03-17
- Completeness: 100%
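To illustrate the image-captioning use case listed above: with BLIP-2, passing only the image and no text prompt makes the model generate a free-form caption. The sketch below makes the same assumptions as the VQA example (hypothetical `example.jpg`, CUDA GPU, `Salesforce/blip2-opt-2.7b` checkpoint).

```python
# BLIP-2 image captioning sketch: omitting the text prompt yields a caption.
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

checkpoint = "Salesforce/blip2-opt-2.7b"
processor = Blip2Processor.from_pretrained(checkpoint)
model = Blip2ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.float16
).to("cuda")

image = Image.open("example.jpg")  # hypothetical placeholder path
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)

caption_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(caption_ids[0], skip_special_tokens=True).strip())
```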
Index Score: 71.9
- Adoption: 83
- Quality: 91
- Freshness: 78
- Citations: 82
- Engagement: 0