On-demand pricing per GPU per hour across 12+ providers. Filter by GPU type, provider category, or use case. Spot and reserved pricing included where available.
| Provider | Type | GPU | On-Demand ($/hr) | Spot / Preemptible ($/hr) | 1-yr Reserved ($/hr) | Availability | Notes | Link |
|---|---|---|---|---|---|---|---|---|
| Vast.ai | Marketplace | H100 SXM5 80GB | $1.87 | $0.80–1.20 | N/A | Good | Spot marketplace. Prices fluctuate. Verify before running long jobs. | vast.ai ↗ |
| RunPod | Neocloud | H100 SXM5 80GB | $1.99 | $1.09 | N/A | Good | Community cloud pricing. Secure cloud is $2.39/hr. | runpod.io ↗ |
| Lambda Labs | Neocloud | H100 SXM5 80GB | $2.49 | N/A | $1.60 | Limited | No spot. Reserved 1-yr pricing requires contract. | lambda.ai ↗ |
| CoreWeave | Neocloud | H100 SXM5 80GB | $2.99 | N/A | $1.85 | Good | Enterprise SLAs, Kubernetes native. Reserved requires 1+ yr commit. | coreweave.com ↗ |
| Google Cloud | Hyperscaler | H100 SXM5 80GB | $3.00 | $1.05 | $1.95 | Good | A3-highgpu instance. Strong TPU alternatives also available. | GCP ↗ |
| AWS | Hyperscaler | H100 SXM5 80GB | $3.90 | $1.50 | $2.30 | Good | p5.48xlarge (8× H100). Prices cut 44% in June 2025. | AWS ↗ |
| Azure | Hyperscaler | H100 SXM5 80GB | $6.98 | $2.80 | $3.99 | Limited | NC H100 v5. East US pricing. Among the highest list prices. | Azure ↗ |
| JarvisLabs | Neocloud | A100 SXM4 80GB | $1.49 | N/A | N/A | Limited | Per-minute billing. Good for short experimentation runs. | jarvislabs.ai ↗ |
| RunPod | Neocloud | A100 SXM4 80GB | $1.49 | $0.85 | N/A | Good | Community cloud. Secure cloud slightly higher. | runpod.io ↗ |
| Lambda Labs | Neocloud | A100 SXM4 80GB | $2.06 | N/A | $1.35 | Limited | Often waitlisted. Reserved pricing strongly recommended. | lambda.ai ↗ |
| AWS | Hyperscaler | A100 SXM4 80GB | $3.40 | $1.20 | $2.10 | Good | p4d.24xlarge (8× A100). Requires full instance purchase. | AWS ↗ |
| Azure | Hyperscaler | A100 SXM4 80GB | $3.43 | $1.30 | $2.20 | Limited | NC A100 v4 series. Multi-GPU instance only. | Azure ↗ |
| Google Cloud | Hyperscaler | A100 SXM4 80GB | $3.00 | $1.10 | $1.95 | Good | A2 ultra family. CUD (committed use) discounts available. | GCP ↗ |
| Vast.ai | Marketplace | A100 PCIe 40GB | $0.96 | $0.50 | N/A | Good | Marketplace pricing. High variability. | vast.ai ↗ |
| RunPod | Neocloud | A100 PCIe 40GB | $1.19 | $0.72 | N/A | Good | Good for inference. Less VRAM than 80GB variant. | runpod.io ↗ |
| RunPod | Neocloud | L40S 48GB | $0.79 | $0.45 | N/A | Good | Excellent for inference workloads. Great price/performance. | runpod.io ↗ |
| AWS | Hyperscaler | L40S 48GB | $2.10 | $0.90 | $1.30 | Good | g6e instance family. Popular for inference at scale. | AWS ↗ |
| RunPod | Neocloud | A10G 24GB | $0.39 | $0.22 | N/A | Good | Budget-friendly inference. Best for smaller models (<13B params). | runpod.io ↗ |
| AWS | Hyperscaler | A10G 24GB | $1.01 | $0.40 | $0.62 | Good | g5 instance family. Strong ecosystem and tooling. | AWS ↗ |
| RunPod | Neocloud | H100 PCIe 80GB | $2.19 | $1.15 | N/A | Good | PCIe variant — 10–20% slower than SXM5 for training. | runpod.io ↗ |
| CoreWeave | Neocloud | H100 PCIe 80GB | $2.49 | N/A | $1.65 | Good | Good for inference. More available than SXM5 variant. | coreweave.com ↗ |
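Comparing list prices alone can mislead when form factors differ: the notes above put the H100 PCIe 10–20% behind SXM5 for training, so a cheaper PCIe rate may cost more per unit of work. A minimal sketch of that adjustment, using the snapshot on-demand prices from the table (they will drift) and an assumed 15% PCIe penalty (midpoint of the quoted range):

```python
# Rank H100 offers by cost per SXM5-equivalent compute-hour.
# Prices are snapshot values from the table above; PCIE_PENALTY is an
# assumed midpoint of the 10-20% training slowdown quoted in the notes.

H100_OFFERS = [
    # (provider, form factor, on-demand $/hr)
    ("Vast.ai",      "SXM5", 1.87),
    ("RunPod",       "SXM5", 1.99),
    ("RunPod",       "PCIe", 2.19),
    ("Lambda Labs",  "SXM5", 2.49),
    ("CoreWeave",    "PCIe", 2.49),
    ("CoreWeave",    "SXM5", 2.99),
    ("Google Cloud", "SXM5", 3.00),
    ("AWS",          "SXM5", 3.90),
    ("Azure",        "SXM5", 6.98),
]

PCIE_PENALTY = 0.15  # assumed training slowdown vs SXM5

def effective_rate(form_factor: str, hourly: float) -> float:
    """Dollars per SXM5-equivalent compute-hour."""
    throughput = 1.0 - PCIE_PENALTY if form_factor == "PCIe" else 1.0
    return hourly / throughput

ranked = sorted(H100_OFFERS, key=lambda o: effective_rate(o[1], o[2]))
for provider, ff, rate in ranked:
    print(f"{provider:14s} {ff:5s} ${rate:.2f}/hr -> "
          f"${effective_rate(ff, rate):.2f} effective")
```

With these numbers, CoreWeave's $2.49 PCIe works out to roughly $2.93 effective, landing it behind the $2.49 Lambda SXM5 despite the identical list price.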
The H100 is faster, but not always 2× cheaper per unit of compute. Here's how to actually decide.
Egress fees, storage, networking: AWS and GCP list prices rarely tell the full story.
Who has H100s in stock, where the waitlists are, and what spot market pricing tells us.
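Spot discounts also only pay off once preemption waste is counted. A rough sketch of that break-even, under an assumed model where each interruption loses on average half a checkpoint interval of work, using RunPod's snapshot H100 prices from the table:

```python
# Is spot still cheaper than on-demand after preemption waste?
# Assumed model: each interruption loses half a checkpoint interval of work.
# Rates are RunPod H100 SXM5 snapshot prices from the table above.

def effective_spot_rate(spot_rate: float,
                        interruptions_per_hour: float,
                        checkpoint_interval_hr: float) -> float:
    """Spot $/hr adjusted for recomputed work after preemptions."""
    wasted_fraction = interruptions_per_hour * checkpoint_interval_hr / 2
    # You pay for every wall-clock hour, but only the non-wasted fraction
    # of each hour produces useful progress.
    return spot_rate / max(1.0 - wasted_fraction, 1e-9)

ON_DEMAND = 1.99  # $/hr, snapshot
SPOT = 1.09       # $/hr, snapshot

for ckpt in (0.25, 0.5, 1.0):  # checkpoint every 15 min, 30 min, 1 hr
    eff = effective_spot_rate(SPOT, interruptions_per_hour=0.5,
                              checkpoint_interval_hr=ckpt)
    verdict = "spot wins" if eff < ON_DEMAND else "on-demand wins"
    print(f"checkpoint every {ckpt:>4}h -> effective ${eff:.2f}/hr ({verdict})")
```

The takeaway under these assumptions: frequent checkpointing keeps the spot discount intact, while infrequent checkpoints on a flaky spot market can quietly erase it.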