
Google TPU v5p

by Google · paid · Last verified 2026-03-17

Google's most powerful TPU for large-scale AI training. Each chip carries 95 GB of HBM2e memory, and chips scale into massive pod configurations linked by Google's proprietary inter-chip interconnect (ICI), targeting training of the largest frontier models.

https://cloud.google.com/tpu/docs/v5p
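The 95 GB HBM2e figure above sets a hard ceiling on per-chip model capacity. As a rough, back-of-the-envelope sketch (weights only, ignoring optimizer state, gradients, and activations, which dominate real training memory), here is how many bfloat16 parameters fit in one chip's HBM:

```python
# Back-of-the-envelope: bfloat16 parameters that fit in one
# TPU v5p chip's 95 GB of HBM2e (figure from the listing above).
# Assumption: weights only -- no optimizer state, gradients,
# or activations, which in practice dominate training memory.

HBM_BYTES = 95 * 10**9   # 95 GB per chip
BF16_BYTES = 2           # bfloat16 is 16 bits = 2 bytes

max_params = HBM_BYTES // BF16_BYTES
print(f"{max_params / 1e9:.1f}B parameters")  # -> 47.5B parameters
```

This is why frontier-scale training on v5p relies on pod-scale sharding across many chips rather than fitting a model on one device.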
Grade: C+ (Average)
Adoption: C+ · Quality: A+ · Freshness: A+ · Citations: B+ · Engagement: F

Specifications

License: Proprietary
Pricing: paid
Capabilities: ai-training, inference, bfloat16-compute, int8-compute, pod-scale
Integrations: jax, tensorflow, pytorch-xla, gcp
Use Cases: frontier-model-training, large-scale-pretraining, research
API Available: Yes
Tags: tpu, data-center, training, inference, google, cloud
Added: 2026-03-17
Completeness: 100%

Index Score: 58.7

Adoption: 55
Quality: 96
Freshness: 90
Citations: 70
Engagement: 0
