
NVIDIA H200

by NVIDIA · paid · Last verified 2026-03-17

Enhanced version of the H100 featuring HBM3e memory with 141 GB capacity and 4.8 TB/s bandwidth. The added capacity and bandwidth chiefly benefit memory-bound AI inference workloads and large-model serving.
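For memory-bound LLM decoding, memory bandwidth sets a hard ceiling on token throughput: at batch size 1, every generated token must stream roughly the full set of model weights from HBM. A back-of-envelope sketch of that ceiling, assuming a hypothetical 70B-parameter model in an 8-bit format (the model size and dtype are illustrative, not from this listing):

```python
def decode_tokens_per_sec(bandwidth_tb_s: float,
                          params_billions: float,
                          bytes_per_param: float) -> float:
    """Bandwidth-bound upper limit on batch-1 decode throughput.

    Assumes each generated token reads all model weights from HBM once,
    ignoring KV-cache traffic, activations, and compute time.
    """
    bytes_per_token = params_billions * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / bytes_per_token

# H200: 4.8 TB/s HBM3e; hypothetical 70B model at 1 byte/param (FP8)
limit = decode_tokens_per_sec(4.8, 70, 1)
print(f"{limit:.1f} tokens/s upper bound")  # ≈ 68.6 tokens/s
```

Real throughput lands below this bound (KV-cache reads and kernel overheads add traffic), and batching raises aggregate throughput by amortizing each weight read across many sequences, which is why the listing pairs high-bandwidth memory with large-model serving.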

https://www.nvidia.com/en-us/data-center/h200/
Overall: B+ (Good)
Adoption: A · Quality: A+ · Freshness: A+ · Citations: B+ · Engagement: F

Specifications

License
Proprietary
Pricing
paid
Capabilities
ai-training, inference, fp8-compute, nvlink, transformer-engine, high-bandwidth-memory
Integrations
cuda, tensorrt, nccl, cudnn
Use Cases
llm-inference, large-model-serving, hpc, llm-training
API Available
No
Tags
gpu, data-center, training, inference, hopper, hbm3e
Added
2026-03-17
Completeness
100%

Index Score

71.3
Adoption
80
Quality
99
Freshness
92
Citations
78
Engagement
0
