Fast Transformer Decoding: One Write-Head is All You Need (Multi-Query Attention)
by Google Brain · free · Last verified 2026-03-17
Introduces Multi-Query Attention (MQA), in which all query heads share a single key head and a single value head during decoding, shrinking the key-value cache that must be loaded at each step. Achieves significant memory-bandwidth savings for autoregressive generation with minimal quality degradation, pioneering efficient LLM inference.
https://arxiv.org/abs/1911.02150
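A minimal NumPy sketch of one MQA decoding step, to make the memory-bandwidth argument concrete: all query heads attend to one shared key/value head, so the cache stores a single K and V vector per token instead of one per head. The function and variable names (`mqa_decode_step`, `d_model`, `n_heads`, etc.) and the toy shapes are illustrative assumptions, not code from the paper.

```python
import numpy as np

def mqa_decode_step(x, kv_cache, W_q, W_k, W_v, W_o, n_heads):
    """One multi-query attention decoding step (illustrative sketch).

    x        : (d_model,)          hidden state of the current token
    kv_cache : dict with 'k', 'v'  each (t, d_head) -- ONE shared head, not n_heads
    W_q      : (d_model, n_heads * d_head)
    W_k, W_v : (d_model, d_head)   single shared key/value projections
    W_o      : (n_heads * d_head, d_model)
    """
    d_head = W_k.shape[1]

    # Project the new token. Only one k/v vector is appended to the cache,
    # so the cache (and the memory traffic per step) is n_heads times smaller
    # than with standard multi-head attention.
    q = (x @ W_q).reshape(n_heads, d_head)            # (n_heads, d_head)
    kv_cache['k'] = np.vstack([kv_cache['k'], x @ W_k])
    kv_cache['v'] = np.vstack([kv_cache['v'], x @ W_v])
    K, V = kv_cache['k'], kv_cache['v']                # (t, d_head) each

    # Every query head attends to the SAME keys and values.
    scores = q @ K.T / np.sqrt(d_head)                 # (n_heads, t)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    heads = weights @ V                                 # (n_heads, d_head)

    return heads.reshape(-1) @ W_o, kv_cache            # (d_model,) output
```

Usage is the same as an ordinary incremental decoder loop: initialize `kv_cache = {'k': np.zeros((0, d_head)), 'v': np.zeros((0, d_head))}` and call the step once per generated token.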
Overall grade: B (Above Average)
Adoption: B+ · Quality: A · Freshness: C · Citations: B · Engagement: F
Specifications
- License: Open Access
- Pricing: free
- Capabilities: efficient-decoding, kv-cache-optimization, autoregressive-generation
- Integrations:
- Use Cases: inference-optimization, memory-efficient-serving
- API Available: No
- Tags: multi-query-attention, mqa, inference-speed, kv-cache, decoding
- Added: 2026-03-17
- Completeness: 100%
Index Score: 63.2
- Adoption: 75
- Quality: 85
- Freshness: 45
- Citations: 65
- Engagement: 0