In-context Learning and Induction Heads
by Anthropic · free · Last verified 2026-03-17
This paper identifies induction heads, attention heads that complete previously seen token patterns of the form [A][B] ... [A] → [B], as a key mechanistic basis for in-context learning in transformers. These heads require at least two attention layers to form and emerge abruptly during a phase change in training; the study presents correlational and causal evidence that they account for much of the in-context learning ability of transformer language models.
https://arxiv.org/abs/2209.11895
Overall: B (Above Average)
Adoption: B · Quality: A+ · Freshness: B · Citations: B+ · Engagement: F
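For readers new to the mechanism, here is a minimal Python sketch, not taken from the paper, of the two-step rule an induction head implements: match the current token against earlier context, then copy forward the token that followed the match. The string tokens and example sequence are illustrative assumptions.

```python
# A minimal sketch of the pattern-completion rule an induction head
# implements; tokens are simplified to strings for illustration.

def induction_predict(tokens):
    """Complete the pattern [A][B] ... [A] -> [B] by prefix matching."""
    current = tokens[-1]
    # Scan backwards for an earlier occurrence of the current token
    # (the "prefix matching" step).
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            # Copy forward the token that followed it (the "copying" step).
            return tokens[i + 1]
    return None  # No earlier occurrence: nothing to copy.

# Illustrative example: the repeated token "Mr" triggers a copy of "Dursley".
print(induction_predict(["Mr", "Dursley", "was", "proud", "of", "Mr"]))
# -> "Dursley"
```

In a real transformer this behavior is realized by attention composition rather than an explicit scan: a previous-token head in an earlier layer shifts information forward one position, and the induction head attends to it and boosts the matched continuation.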
Specifications
- License: Open Access
- Pricing: free
- Capabilities: circuit-analysis, in-context-learning-analysis, attention-mechanism-study, phase-transition-detection
- Integrations:
- Use Cases: ai-safety, model-interpretability, training-dynamics-research
- API Available: No
- Tags: interpretability, circuits, induction-heads, in-context-learning, mechanistic-interpretability
- Added: 2026-03-17
- Completeness: 100%
Index Score: 63.9
- Adoption: 65
- Quality: 92
- Freshness: 68
- Citations: 78
- Engagement: 0