GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints

by Google Research · free · Last verified 2026-03-17

Introduces Grouped-Query Attention (GQA), a generalization that interpolates between multi-head and multi-query attention. By sharing one key-value head across each group of query heads, it achieves quality close to multi-head attention at near multi-query inference speed. Adopted by LLaMA 2, Mistral, and Falcon.

https://arxiv.org/abs/2305.13245
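
How the mechanism works, as a minimal sketch (assumptions: PyTorch, with shapes and names chosen for illustration; this is not the paper's reference implementation): each shared key-value head is broadcast to the query heads in its group, then standard scaled dot-product attention runs as usual.

```python
import math

import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    """Attention where each group of query heads shares one key-value head.

    Illustrative shapes: q is (batch, n_q_heads, seq, head_dim);
    k and v are (batch, n_kv_heads, seq, head_dim),
    with n_q_heads divisible by n_kv_heads.
    """
    group_size = q.shape[1] // k.shape[1]
    # Broadcast each shared KV head to the query heads in its group.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    return F.softmax(scores, dim=-1) @ v

# Toy shapes: 8 query heads grouped over 2 KV heads (groups of 4).
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
out = grouped_query_attention(q, k, v)  # -> (1, 8, 16, 64)
```

Setting the number of KV heads equal to the number of query heads recovers multi-head attention; setting it to 1 recovers multi-query. The paper converts existing multi-head checkpoints by mean-pooling their key and value heads into groups, then briefly uptraining.
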
Grade: B (Above Average)
Adoption: A · Quality: A · Freshness: B+ · Citations: B · Engagement: F

Specifications

License
Open Access
Pricing
free
Capabilities
efficient-attention, kv-cache-optimization (cache-size sketch below), inference-speed
Integrations
huggingface-transformers, vllm
Use Cases
inference-optimization, memory-efficient-serving, long-context-inference
API Available
No
Tags
grouped-query-attention, gqa, multi-query-attention, inference-speed, kv-cache
Added
2026-03-17
Completeness
100%
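
The kv-cache-optimization capability and memory-efficient-serving use case follow from cache size scaling with the number of KV heads rather than query heads. A back-of-envelope sketch, with hypothetical 70B-class dimensions (all numbers illustrative):

```python
def kv_cache_bytes(n_kv_heads, head_dim, n_layers, seq_len, batch, bytes_per_elem=2):
    # Keys and values (factor of 2), cached for every layer and token.
    return 2 * n_layers * batch * seq_len * n_kv_heads * head_dim * bytes_per_elem

# Hypothetical config: 80 layers, 64 query heads, head_dim 128, fp16, batch 8.
mha = kv_cache_bytes(n_kv_heads=64, head_dim=128, n_layers=80, seq_len=4096, batch=8)
gqa = kv_cache_bytes(n_kv_heads=8,  head_dim=128, n_layers=80, seq_len=4096, batch=8)
print(f"MHA: {mha / 2**30:.1f} GiB, GQA (8 KV heads): {gqa / 2**30:.1f} GiB")
# -> MHA: 80.0 GiB, GQA (8 KV heads): 10.0 GiB (an 8x reduction)
```
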

Index Score

67.4
Adoption
82
Quality
88
Freshness
72
Citations
68
Engagement
0
