Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
by Google Brain · open-source · Last verified 2026-03-17
Introduces Switch Transformers, a simplified mixture-of-experts (MoE) architecture that routes each token to exactly one expert (top-1 routing), letting parameter counts grow to the trillion scale while compute per token stays roughly constant. Switch Transformers achieve up to a 7x pretraining speedup over a dense T5-Base baseline at equal computational cost while maintaining model quality.
https://arxiv.org/abs/2101.03961
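To make top-1 routing concrete, below is a minimal sketch in JAX of how a Switch layer dispatches tokens. It is not the authors' Mesh TensorFlow implementation; the function name `switch_route`, the dense per-expert einsum, and all shapes are illustrative assumptions, and the paper's auxiliary load-balancing loss and distributed all-to-all dispatch are omitted for brevity.

```python
# Minimal sketch of Switch-style top-1 routing (illustrative, not the paper's code).
import jax
import jax.numpy as jnp

def switch_route(tokens, router_w, expert_w, capacity_factor=1.25):
    """Route each token to its single highest-probability expert.

    tokens:   [num_tokens, d_model]            token representations
    router_w: [d_model, num_experts]           router projection
    expert_w: [num_experts, d_model, d_model]  one weight matrix per expert
    """
    num_tokens, d_model = tokens.shape
    num_experts = router_w.shape[1]

    # Router produces a probability distribution over experts for each token.
    logits = tokens @ router_w                   # [num_tokens, num_experts]
    probs = jax.nn.softmax(logits, axis=-1)
    expert_idx = jnp.argmax(probs, axis=-1)      # top-1 expert per token
    gate = jnp.max(probs, axis=-1)               # gate value scales the expert output

    # Expert capacity: each expert processes at most this many tokens per batch;
    # overflowing tokens are dropped and pass through via the residual path.
    capacity = int(capacity_factor * num_tokens / num_experts)
    position_in_expert = jnp.take_along_axis(
        jnp.cumsum(jax.nn.one_hot(expert_idx, num_experts), axis=0) - 1,
        expert_idx[:, None], axis=1).squeeze(-1)
    keep = position_in_expert < capacity

    # Apply the chosen expert to each kept token. A dense einsum over all experts
    # is used here for clarity; real implementations dispatch tokens to experts
    # with all-to-all communication so each expert only sees its own tokens.
    expert_out = jnp.einsum('td,edf->tef', tokens, expert_w)   # [tokens, experts, d]
    chosen = jnp.take_along_axis(
        expert_out, expert_idx[:, None, None], axis=1).squeeze(1)
    return jnp.where(keep[:, None], gate[:, None] * chosen, tokens)

# Example: 16 tokens of width 8 routed across 4 experts.
key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
tokens = jax.random.normal(k1, (16, 8))
router_w = jax.random.normal(k2, (8, 4)) * 0.1
expert_w = jax.random.normal(k3, (4, 8, 8)) * 0.1
print(switch_route(tokens, router_w, expert_w).shape)  # (16, 8)
```

Because only one expert runs per token, adding experts increases parameters without increasing the per-token FLOPs, which is the source of the sub-linear compute scaling claimed above.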
Rating: B+ (Good) · Adoption: A · Quality: A+ · Freshness: B+ · Citations: A · Engagement: F
Specifications
- License: Apache 2.0
- Pricing: open-source
- Capabilities: sparse-computation, efficient-scaling, trillion-parameter-modeling
- Integrations: none listed
- Use Cases: efficient-pretraining, large-scale-nlp, model-scaling
- API Available: No
- Tags: mixture-of-experts, moe, sparse-model, scaling, efficiency
- Added: 2026-03-17
- Completeness: 100%
Index Score: 75.8
- Adoption: 88
- Quality: 93
- Freshness: 71
- Citations: 88
- Engagement: 0