NVIDIA H100
by NVIDIA · paid · Last verified 2026-03-17
NVIDIA's flagship data center GPU based on the Hopper architecture. Designed for large-scale AI training and inference with Transformer Engine and FP8 support. Delivers breakthrough performance for LLM training and HPC workloads.
https://www.nvidia.com/en-us/data-center/h100/
Overall grade: A (Great)
Adoption: A+ · Quality: A+ · Freshness: A · Citations: A+ · Engagement: F
Specifications
- License: Proprietary
- Pricing: paid
- Capabilities: ai-training, inference, fp8-compute, nvlink, transformer-engine
- Integrations: cuda, tensorrt, nccl, cudnn
- Use Cases: llm-training, inference-serving, hpc, scientific-computing
- API Available: No
- Tags: gpu, data-center, training, inference, hopper
- Added: 2026-03-17
- Completeness: 100%
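Since the capabilities above list fp8-compute, a deployment script typically gates FP8 paths on the GPU's compute capability. A minimal sketch, assuming a Python environment; the helper names `supports_fp8` and `pick_dtype` are hypothetical, and the check relies on FP8 tensor cores having been introduced with the Hopper architecture (compute capability 9.0):

```python
def supports_fp8(major: int, minor: int) -> bool:
    """True if the CUDA compute capability implies Hopper-class FP8 tensor cores."""
    # FP8 tensor cores first shipped with Hopper (sm_90), e.g. the H100.
    return (major, minor) >= (9, 0)


def pick_dtype(major: int, minor: int) -> str:
    """Choose a training dtype: FP8 on Hopper or newer, else fall back to bf16."""
    # Pre-Hopper parts such as the A100 (capability 8.0) lack FP8 tensor cores.
    return "fp8" if supports_fp8(major, minor) else "bf16"


if __name__ == "__main__":
    print(pick_dtype(9, 0))  # H100 (Hopper) -> fp8
    print(pick_dtype(8, 0))  # A100 (Ampere) -> bf16
```

In a real setup the `(major, minor)` pair would come from the driver (for example, PyTorch's `torch.cuda.get_device_capability()`), rather than being hard-coded.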
Index Score: 80.1
- Adoption: 95
- Quality: 98
- Freshness: 85
- Citations: 90
- Engagement: 0