In-context Learning and Induction Heads
by Anthropic · free · Last verified 2026-03-17
This paper establishes a causal link between specific transformer circuits, termed "induction heads," and the phenomenon of in-context learning. Induction heads are attention heads that complete patterns by prefix matching and copying: given a sequence [A][B] ... [A], they attend back to the earlier occurrence and predict [B]. The paper shows that these heads, which require at least two layers to form, emerge abruptly in a phase transition during training and are a key mechanistic driver of in-context learning in LLMs.
https://arxiv.org/abs/2209.11895
B (Above Average)
Adoption: B · Quality: A+ · Freshness: B · Citations: B+ · Engagement: F
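As a concrete illustration of the pattern-completion behavior described above, here is a minimal sketch of the prefix-matching diagnostic: run a head on a sequence of random tokens repeated with a fixed period, and measure how much attention each position puts on the token that followed the previous occurrence of the current token. The function name `prefix_matching_score` and the synthetic attention matrix are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def prefix_matching_score(attn: np.ndarray, period: int) -> float:
    """Average attention mass on the induction offset t - period + 1.

    `attn` is a (seq_len, seq_len) attention pattern for one head, measured
    on a token sequence that repeats with the given period. An induction
    head at position t attends to the token that followed the previous
    occurrence of the current token, i.e. position t - period + 1, and
    copies it forward as the prediction for t + 1.
    """
    seq_len = attn.shape[0]
    offsets = [attn[t, t - period + 1] for t in range(period, seq_len)]
    return float(np.mean(offsets))

# Synthetic stand-in for an ideal induction head on a period-8 repeated
# sequence of length 16: all attention mass sits on the induction offset.
period, seq_len = 8, 16
attn = np.zeros((seq_len, seq_len))
for t in range(1, seq_len):
    attn[t, max(0, t - period + 1)] = 1.0
print(prefix_matching_score(attn, period))  # 1.0 for a perfect induction head
```

Roughly, heads that score high on this diagnostic (alongside a companion copying score) are the ones the paper identifies as induction heads when tracing their emergence over training.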
Specifications
- License
- Open Access
- Pricing
- free
- Capabilities
- mechanistic-interpretability, circuit-analysis, in-context-learning-analysis, attention-mechanism-study, causal-intervention-analysis, phase-transition-detection, transformer-behavior-prediction, model-scaling-analysis
- Integrations
- Use Cases
- API Available
- No
- Tags
- interpretability, circuits, induction-heads, in-context-learning, mechanistic-interpretability, transformer-architecture, attention-mechanisms, phase-transitions, llm-theory, causal-analysis
- Added
- 2026-03-17
- Completeness
- 90%
Index Score
63.9
- Adoption: 65
- Quality: 92
- Freshness: 68
- Citations: 78
- Engagement: 0