NVIDIA DGX H100
by NVIDIA · enterprise · Last verified 2026-03-17
The NVIDIA DGX H100 is a purpose-built AI supercomputer, serving as the foundational building block for large-scale AI infrastructure. It integrates eight H100 Tensor Core GPUs with high-speed NVLink interconnects, providing a turnkey solution for the most demanding AI training, inference, and data analytics workloads.
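The eight-GPU configuration above implies some useful aggregate figures. The sketch below is a back-of-the-envelope calculation, not an official spec: the per-GPU numbers (80 GB HBM3, FP8 Tensor Core throughput with sparsity, per-GPU NVLink bandwidth) are assumptions taken from NVIDIA's published H100 SXM5 datasheet figures and should be verified against current documentation.

```python
# Back-of-the-envelope aggregate figures for an 8-GPU DGX H100 node.
# All per-GPU constants below are assumptions from NVIDIA's public
# H100 SXM5 datasheet; verify against current documentation.

GPUS_PER_NODE = 8
HBM_PER_GPU_GB = 80          # H100 SXM5 HBM3 capacity (assumed)
FP8_TFLOPS_PER_GPU = 3958    # FP8 Tensor Core, with sparsity (assumed)
NVLINK_BW_GBPS = 900         # 4th-gen NVLink per GPU, bidirectional (assumed)

total_hbm_gb = GPUS_PER_NODE * HBM_PER_GPU_GB
total_fp8_pflops = GPUS_PER_NODE * FP8_TFLOPS_PER_GPU / 1000  # TFLOPS -> PFLOPS

print(f"Aggregate HBM3: {total_hbm_gb} GB")              # 640 GB
print(f"Aggregate FP8:  {total_fp8_pflops:.1f} PFLOPS")  # ~31.7 PFLOPS
```

These totals line up with NVIDIA's advertised "32 petaFLOPS FP8" marketing figure for the system, which rounds the 31.7 PFLOPS aggregate.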
https://www.nvidia.com/en-us/data-center/dgx-h100/
Overall: B (Above Average)
Adoption: B+ · Quality: A+ · Freshness: A · Citations: A · Engagement: F
Specifications
- License
- Proprietary
- Pricing
- enterprise
- Capabilities
- 8x H100 SXM5 GPUs, 4th Generation NVLink Interconnect, NVSwitch Fabric, FP8 Precision with Transformer Engine, Dual Intel Xeon Platinum 8480C Processors, 30.72TB NVMe Flash Storage, NVIDIA ConnectX-7 and BlueField-3 DPUs, Pre-installed NVIDIA AI Enterprise Software Suite, Scalable to DGX SuperPOD architecture
- Integrations
- NVIDIA DGX SuperPOD, NVIDIA Base Command Manager, NVIDIA AI Enterprise Software, Kubernetes and container orchestration platforms, Third-party data center storage solutions, InfiniBand and Ethernet networking fabrics
- API Available
- No
- Tags
- ai-supercomputer, large-scale-training, enterprise-ai, hopper-architecture, data-center-hardware, generative-ai, hpc, nvlink, nvswitch, infiniband, digital-twin
- Added
- 2026-03-17
- Completeness
- 0.95%
Index Score: 67.9
- Adoption: 70
- Quality: 97
- Freshness: 80
- Citations: 82
- Engagement: 0