Language Models are Few-Shot Learners (GPT-3)
by OpenAI · free · Last verified 2026-03-17
Introduced GPT-3, a 175-billion-parameter autoregressive language model with remarkable few-shot learning ability across diverse tasks. Showed that scaling model size dramatically improves in-context learning, where the model picks up a task from examples placed in the prompt alone, with no gradient updates or fine-tuning, reshaping how the field approaches task adaptation.
https://arxiv.org/abs/2005.14165
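The paper's core recipe is simple to reproduce with any causal language model: concatenate a handful of task demonstrations into the prompt and let the model complete the pattern at inference time, with no weight updates. Below is a minimal sketch of that few-shot setup. Since GPT-3's weights were never released (and this listing reports no API), the openly available GPT-2 via Hugging Face `transformers` stands in as an assumed substitute; the English-to-French demonstrations follow the paper's own illustrative example.

```python
# Few-shot in-context learning sketch: task demonstrations go in the
# prompt, and the model infers the pattern at inference time -- no
# gradient updates or fine-tuning. GPT-2 is an assumed stand-in for
# GPT-3, whose weights were never released.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# K=3 demonstrations of an English->French translation task,
# followed by the query the model should complete.
prompt = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "plush giraffe => girafe en peluche\n"
    "cheese =>"
)

out = generator(prompt, max_new_tokens=10, do_sample=False)
print(out[0]["generated_text"])
```

A small model like GPT-2 will complete this pattern unreliably; the paper's central finding is that this same zero-update setup becomes dramatically more accurate as parameter count scales toward 175B.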
Specifications
- License: Open Access
- Pricing: free
- Capabilities: few-shot-learning, text-generation, code-generation, in-context-learning
- Integrations: none listed
- Use Cases: few-shot-nlp, text-generation, code-synthesis, question-answering
- API Available: No
- Tags: gpt-3, few-shot, in-context-learning, scaling, openai, foundational
- Added: 2026-03-17
- Completeness: 100%
Index Score: 82 (A, Great)
- Adoption: 95 (A+)
- Quality: 96 (A+)
- Freshness: 42 (C)
- Citations: 99 (A+)
- Engagement: 0 (F)