NVIDIA B100
by NVIDIA · enterprise · Last verified 2026-03-17
The NVIDIA B100 is a data center GPU based on the Blackwell architecture, succeeding the H100. It offers substantial performance improvements for AI training and inference, featuring a second-generation Transformer Engine with FP4 precision and fifth-generation NVLink interconnect for massive multi-GPU scaling.
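To illustrate what FP4 precision means in practice, here is a minimal sketch of round-to-nearest quantization into the FP4 E2M1 value set (the 4-bit format Blackwell's Transformer Engine supports). This is an illustrative simulation only, not NVIDIA's actual Transformer Engine API; the function names and per-tensor scaling scheme are assumptions for the example.

```python
# Illustrative FP4 (E2M1) quantization sketch -- NOT the NVIDIA API.
# E2M1 has 1 sign bit, 2 exponent bits, 1 mantissa bit, giving these
# non-negative representable magnitudes:
FP4_E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_fp4(x: float, scale: float = 1.0) -> float:
    """Round x/scale to the nearest FP4 E2M1 magnitude, then rescale."""
    v = x / scale
    sign = -1.0 if v < 0 else 1.0
    mag = min(abs(v), 6.0)  # saturate at FP4's maximum magnitude
    nearest = min(FP4_E2M1_VALUES, key=lambda q: abs(q - mag))
    return sign * nearest * scale

# Usage: quantize a small weight list with a per-tensor scale that maps
# the largest magnitude onto FP4's max value (a common hypothetical recipe).
weights = [0.12, -0.9, 2.7, 5.1]
scale = max(abs(w) for w in weights) / 6.0
quantized = [quantize_fp4(w, scale) for w in weights]
```

Storing weights this way halves memory traffic relative to FP8, which is the main source of the inference-throughput gains the card advertises.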
https://www.nvidia.com/en-us/data-center/b100/
B (Above Average)
Adoption: B+ · Quality: A+ · Freshness: A+ · Citations: B+ · Engagement: F
Specifications
- License
- Proprietary
- Pricing
- enterprise
- Capabilities
- large-scale-ai-training, high-throughput-inference, fp4-and-fp8-compute, second-generation-transformer-engine, nvlink-5-interconnect, high-bandwidth-memory-hbm3e, multi-instance-gpu-mig, confidential-computing, high-performance-computing-hpc, gpu-accelerated-data-analytics
- Integrations
- NVIDIA DGX and HGX Systems, NVIDIA CUDA Toolkit, NVIDIA AI Enterprise Software Suite, Major Cloud Providers (AWS, Azure, GCP, OCI), OEM Server Platforms (Dell, HPE, Supermicro), NVIDIA Quantum-2 InfiniBand Platform
- API Available
- No
- Tags
- gpu, ai-accelerator, data-center, blackwell-architecture, deep-learning, llm-training, generative-ai, hpc, nvlink, transformer-engine, fp4
- Added
- 2026-03-17
- Completeness
- 85%
Index Score
- Overall: 65.8
- Adoption: 70
- Quality: 99
- Freshness: 96
- Citations: 72
- Engagement: 0