
NVIDIA GB200 NVL72

by NVIDIA · enterprise · Last verified 2026-03-17

The NVIDIA GB200 NVL72 is a liquid-cooled, rack-scale system designed for exascale AI. It connects 36 Grace Blackwell Superchips, comprising 72 B200 GPUs and 36 Grace CPUs, via fifth-generation NVLink so the rack functions as a single massive GPU for training and inference on trillion-parameter models with high performance and energy efficiency.
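The rack composition follows directly from the superchip layout described above: each GB200 superchip pairs one Grace CPU with two Blackwell GPUs, and 36 superchips fill one NVL72 rack. A minimal sketch of that arithmetic (illustrative only, not an NVIDIA API):

```python
# Illustrative sketch: derive the NVL72 rack composition from the
# superchip layout stated in the description (1 Grace CPU + 2 B200
# GPUs per superchip, 36 superchips per rack).
GRACE_CPUS_PER_SUPERCHIP = 1
B200_GPUS_PER_SUPERCHIP = 2
SUPERCHIPS_PER_RACK = 36

def rack_composition(superchips: int = SUPERCHIPS_PER_RACK) -> dict:
    """Return CPU/GPU counts for a rack built from GB200 superchips."""
    return {
        "grace_cpus": superchips * GRACE_CPUS_PER_SUPERCHIP,
        "b200_gpus": superchips * B200_GPUS_PER_SUPERCHIP,
    }

print(rack_composition())  # {'grace_cpus': 36, 'b200_gpus': 72}
```

The totals match the figures in the description: 36 Grace CPUs and 72 B200 GPUs per rack.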

https://www.nvidia.com/en-us/data-center/gb200-nvl72/

Specifications

License
Proprietary
Pricing
enterprise
Capabilities
Trillion-parameter model training, Real-time LLM inference, Second-Generation Transformer Engine, FP4 and FP6 precision support, Fifth-generation NVLink fabric, Liquid-cooled rack-scale design, Integrated Grace CPU and Blackwell GPU, High-bandwidth memory (HBM3e), Decompression engine for data processing
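The FP4 support listed above refers to a 4-bit floating-point format (E2M1: two exponent bits, one mantissa bit) that the second-generation Transformer Engine uses for low-precision inference. A hedged sketch of round-to-nearest E2M1 quantization, assuming the OCP MX-style value grid; this is an illustration, not NVIDIA's actual kernel:

```python
# Illustrative FP4 (E2M1) quantizer. The grid below is the set of
# non-negative values representable in E2M1 (assumption: OCP MX-style
# encoding, which saturates at magnitude 6).
FP4_E2M1_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_fp4(x: float) -> float:
    """Snap x to the nearest representable E2M1 value, preserving sign."""
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), FP4_E2M1_GRID[-1])  # saturate at +/-6
    return sign * min(FP4_E2M1_GRID, key=lambda v: abs(v - mag))

print([quantize_fp4(v) for v in (0.3, 0.8, 2.4, -5.2, 100.0)])
# [0.5, 1.0, 2.0, -6.0, 6.0]
```

In practice such formats are applied per-block with a shared scale factor, which is what makes 4-bit precision usable for large-model inference.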
Integrations
NVIDIA CUDA Platform, NVIDIA AI Enterprise Software Suite, TensorRT-LLM, PyTorch, TensorFlow, JAX, Standard data center liquid-cooling infrastructure, Ethernet and InfiniBand networking fabrics
API Available
No
Tags
gpu, data-center, training, inference, blackwell, grace-blackwell, rack-scale, hpc, supercomputing, liquid-cooling, nvlink, generative-ai
Added
2026-03-17
Completeness
95%

Index Score

58.8
Adoption
50
Quality
100
Freshness
98
Citations
75
Engagement
0
