NVIDIA B200
by NVIDIA · paid · Last verified 2026-03-17
Top-of-the-line Blackwell GPU with the largest memory and compute configuration in the family. Optimized for the most demanding AI training runs and for large-scale inference deployments that need the highest per-chip throughput.
https://www.nvidia.com/en-us/data-center/b200/
Index grade: B (Above Average)
Adoption: B · Quality: A+ · Freshness: A+ · Citations: B · Engagement: F
Specifications
- License: Proprietary
- Pricing: paid
- Capabilities: ai-training, inference, fp4-compute, fp8-compute, nvlink5
- Integrations: cuda, tensorrt, nccl, cudnn
- Use Cases: llm-training, frontier-model-training, inference-serving
- API Available: No
- Tags: gpu, data-center, training, inference, blackwell
- Added: 2026-03-17
- Completeness: 100%
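The capability tags above (fp4-compute, fp8-compute, cuda) tie back to the GPU's CUDA compute capability, which is how software typically detects the architecture at runtime. A minimal sketch, assuming the publicly documented mapping of compute-capability major versions to architecture families (sm_10x for Blackwell data-center parts such as the B200); the `arch_name` helper is hypothetical, not an NVIDIA API:

```python
# Hypothetical helper: map a CUDA compute capability (major, minor)
# to its architecture family. Mapping follows NVIDIA's public CUDA
# documentation; major version 10 covers Blackwell parts like the B200.
def arch_name(major: int, minor: int = 0) -> str:
    if major == 8:
        # 8.0 / 8.6 are Ampere, 8.9 is Ada Lovelace
        return "Ada" if minor == 9 else "Ampere"
    families = {9: "Hopper", 10: "Blackwell"}
    return families.get(major, "unknown")

# With PyTorch installed and a Blackwell GPU present, one could feed it
# (assumption, not exercised here):
#   major, minor = torch.cuda.get_device_capability(0)
#   arch_name(major, minor)
```

This keeps the architecture lookup as plain data, so it works without a GPU attached and can be extended as new compute-capability versions ship.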
Index Score: 63
- Adoption: 65
- Quality: 100
- Freshness: 97
- Citations: 68
- Engagement: 0