Groq
by Groq · paid · Last verified 2026-03-17
Groq is a semiconductor and AI inference company that builds the Language Processing Unit (LPU), a custom chip architected specifically for sequential token generation. Groq's inference API delivers some of the fastest publicly available LLM inference, often exceeding 800 tokens per second on large models, at competitive pricing, which makes it well suited to latency-sensitive agentic applications.
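The latency claim above is easy to put in concrete terms: at a sustained throughput of roughly 800 tokens per second, a medium-length response streams out in well under a second. A back-of-envelope sketch (the token count and throughput figure are illustrative, taken from the description above, not measured benchmarks):

```python
# Back-of-envelope generation time at the throughput cited above.
# Both numbers are illustrative assumptions, not measurements.
tokens = 512           # a typical medium-length response
throughput = 800       # tokens per second, per the description
generation_time = tokens / throughput

print(f"{generation_time:.2f} s")  # prints "0.64 s"
```

At a more common provider speed of 50-100 tokens per second, the same response would take 5-10 seconds, which is the gap that matters for real-time and voice use cases.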
https://groq.com
Overall: B (Above Average)
Adoption: B · Quality: A · Freshness: A · Citations: B+ · Engagement: F
Specifications
- License: Proprietary
- Pricing: paid
- Capabilities: ultra-fast-inference, lpu-hardware, managed-inference, openai-compatible-api
- Integrations: langchain, openai-compatible-api, vercel-ai-sdk
- Use Cases: real-time-ai, agentic-applications, low-latency-inference, voice-ai
- API Available: Yes
- Tags: inference, hardware, lpu, ultra-fast-inference, api-provider
- Added: 2026-03-17
- Completeness: 100%
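Since the capabilities list includes an OpenAI-compatible API, existing OpenAI SDK code can typically be pointed at Groq by swapping the base URL and API key. A minimal sketch, with the caveat that the endpoint path and model name here are assumptions to verify against Groq's own documentation:

```python
# Request payload in the OpenAI chat-completions shape that an
# OpenAI-compatible API accepts (model name is an assumption).
payload = {
    "model": "llama-3.3-70b-versatile",
    "messages": [{"role": "user", "content": "Say hello in one word."}],
}

# With the official OpenAI Python SDK installed and a Groq key set,
# the call would look like (endpoint URL is an assumption):
#
#   from openai import OpenAI
#   client = OpenAI(
#       base_url="https://api.groq.com/openai/v1",
#       api_key=os.environ["GROQ_API_KEY"],
#   )
#   resp = client.chat.completions.create(**payload)
#   print(resp.choices[0].message.content)
```

This drop-in compatibility is also what the langchain and vercel-ai-sdk integrations build on: both frameworks accept a custom OpenAI-style base URL.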
Index Score: 62.3
- Adoption: 68
- Quality: 88
- Freshness: 86
- Citations: 70
- Engagement: 0