Cerebras Inference
by Cerebras Systems · paid · Last verified 2026-04-24
Cerebras provides cloud inference powered by its Wafer-Scale Engine (WSE) chip, delivering some of the highest token throughput for large language models. Cerebras Inference serves Llama and other open-weight models with hardware-level advantages that push tokens-per-second beyond what GPU clusters can achieve for certain model sizes.
https://inference.cerebras.ai
Overall grade: C (Below Average)
Adoption: C+ · Quality: B+ · Freshness: A · Citations: C · Engagement: F
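Note that this entry lists API availability as "No" in the specifications below, so the following is a hypothetical integration sketch rather than a documented workflow. It assumes an OpenAI-compatible chat completions endpoint; the base URL, model identifier, and environment variable are assumptions, not details confirmed by this listing.

```python
# Hypothetical sketch: calling a Cerebras-hosted Llama model through an
# OpenAI-compatible chat completions endpoint. The base URL, model name,
# and credential variable below are assumptions, not confirmed by this entry.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",   # assumed endpoint
    api_key=os.environ["CEREBRAS_API_KEY"],  # assumed credential variable
)

response = client.chat.completions.create(
    model="llama3.1-8b",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Summarize wafer-scale inference in one sentence."}
    ],
)

print(response.choices[0].message.content)
```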
Specifications
- License: Proprietary
- Pricing: paid
- Capabilities
- Integrations
- Use Cases
- API Available: No
- Tags: inference, wse, high-throughput, llama, custom-hardware, speed
- Added: 2026-04-24
- Completeness: 60%
Index Score: 44
- Adoption: 50
- Quality: 70
- Freshness: 80
- Citations: 40
- Engagement: 0