
Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

by Google Brain · free · Last verified 2026-03-17

Introduces the Sparsely-Gated Mixture-of-Experts (MoE) layer, which increases model capacity by more than 1000× at only a minor computational cost. A learned gating network selects a sparse subset of expert sub-networks for each input, enabling unprecedented model scale.

https://arxiv.org/abs/1701.06538
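
The description above outlines the core mechanism. Below is a minimal sketch of top-k gating in Python, assuming simple ReLU experts and toy dimensions chosen for illustration; it omits the paper's noisy gating term and load-balancing losses and is not the authors' implementation.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(x, gate_w, experts, k=2):
    """Sparsely-gated MoE forward pass for a single input vector x.

    gate_w:  (d_in, num_experts) gating weights
    experts: list of (W, b) pairs, each mapping d_in -> d_out
    k:       number of experts activated per input
    """
    logits = x @ gate_w                        # one gating logit per expert
    top_k = np.argsort(logits)[-k:]            # indices of the k largest logits
    gates = softmax(logits[top_k])             # renormalise over the selected experts
    out = np.zeros(experts[0][1].shape)
    # Only the k selected experts are evaluated; all others are skipped entirely.
    for g, i in zip(gates, top_k):
        W, b = experts[i]
        out += g * np.maximum(x @ W + b, 0)    # each expert is a small ReLU layer here
    return out

# Toy usage: 8 experts, 2 active per input.
rng = np.random.default_rng(0)
d_in, d_out, num_experts = 16, 16, 8
gate_w = rng.normal(size=(d_in, num_experts))
experts = [(rng.normal(size=(d_in, d_out)), np.zeros(d_out)) for _ in range(num_experts)]
y = moe_forward(rng.normal(size=d_in), gate_w, experts, k=2)

Because only k of the num_experts experts run per input, compute scales with k rather than with the total number of experts, which is what decouples model capacity from computational cost.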
Overall: B+ (Good)
Adoption: A · Quality: A+ · Freshness: C · Citations: B+ · Engagement: F

Specifications

License: Open Access
Pricing: free
Capabilities: sparse-computation, conditional-computation, model-scaling
Integrations: none listed
Use Cases: large-scale-language-modeling, multi-task-learning, efficient-scaling
API Available: No
Tags: mixture-of-experts, moe, sparse, gating, conditional-computation, scaling
Added: 2026-03-17
Completeness: 100%

Index Score: 70.3

Adoption: 82 · Quality: 90 · Freshness: 42 · Citations: 78 · Engagement: 0
