Guide · March 20, 2026 · 12 min read

NVIDIA H100 Price in 2026: Cloud Pricing, Cost Analysis & Where to Rent

The NVIDIA H100 is the gold standard GPU for AI and machine learning in 2026. But how much does it actually cost — and where should you rent one? This complete guide covers H100 pricing, cloud provider comparisons, buy vs rent analysis, and total cost of ownership calculations.

Quick Answer: H100 cloud rental prices range from $2.00–$4.50/hr on most providers (up to $5.40/hr on Azure). The cheapest on-demand H100s are on Voltage Park and Vast.ai at roughly $2.00–$2.10/hr. Purchasing an H100 outright costs $25,000–$40,000 per GPU.

What Is the NVIDIA H100 and Why Does It Dominate AI/ML?

Built on NVIDIA's Hopper architecture, the H100 delivers up to 3–6x the performance of the A100 for transformer-based models. With 80GB of HBM3 memory, 3.35 TB/s bandwidth, and native FP8 support, it has become the default choice for LLM training, fine-tuning, and high-throughput inference. Every major AI lab — from OpenAI to Anthropic to Meta — relies heavily on H100 clusters.

How Much Does an H100 Cost? Buying vs Renting

Purchasing an H100 GPU

  • H100 SXM (80GB HBM3): $25,000–$35,000 per GPU
  • H100 PCIe (80GB HBM3): $25,000–$30,000 per GPU
  • DGX H100 system (8x H100): $250,000–$350,000

Purchase prices have dropped 20–30% from the 2024 scarcity-driven peak. Buying only makes sense for organizations running GPUs at high utilization 24/7 for multiple years.

Renting an H100 on the Cloud

  • On-demand cloud rental: $2.00–$4.50/hr per GPU on dedicated GPU clouds (up to ~$5.40/hr on hyperscalers)
  • Spot/preemptible pricing: $1.50–$2.50/hr (with interruption risk)
  • Monthly reserved: 10–25% discount over on-demand

At $2.50/hr, running an H100 for 1,000 hours costs $2,500 — a fraction of the $30,000+ purchase price. For most teams, renting is the clear winner.

H100 Cloud Pricing Comparison (March 2026)

| Provider | H100 Price/hr | Variant | Billing |
|---|---|---|---|
| Voltage Park | $2.00–$2.50 | SXM 80GB | Per-hour |
| Vast.ai | $2.10 | SXM / PCIe | Per-second |
| TensorDock | $2.20–$2.80 | SXM / PCIe | Per-second |
| RunPod | $2.49 | SXM 80GB | Per-second |
| Lambda Labs | $2.89 | SXM 80GB | Per-hour |
| CoreWeave | $2.95 | SXM 80GB | Per-minute |
| Hyperstack | $2.95 | SXM 80GB | Per-hour |
| Google Cloud (A3) | ~$4.10 | SXM 80GB | Per-second |
| AWS (P5) | ~$4.15 | SXM 80GB | Per-second |
| Microsoft Azure | $5.40 | SXM 80GB | Per-hour |

Cheapest on-demand H100: Voltage Park and Vast.ai at $2.00–$2.10/hr. Best value with reliability: Lambda Labs at $2.89/hr with strong uptime and ML-focused support.

H100 80GB SXM vs PCIe: Price and Performance Differences

| Feature | H100 SXM | H100 PCIe |
|---|---|---|
| Memory | 80GB HBM3 | 80GB HBM3 |
| Memory Bandwidth | 3.35 TB/s | 2.0 TB/s |
| FP16 Performance | 989 TFLOPS | 756 TFLOPS |
| FP8 Performance | 1,979 TFLOPS | 1,513 TFLOPS |
| NVLink | Yes (900 GB/s) | No |
| TDP | 700W | 350W |
| Cloud Price | $2.50–$4.15/hr | $1.89–$3.00/hr |

Key takeaway: The SXM variant is 30–40% faster for training workloads due to higher memory bandwidth and NVLink support for multi-GPU communication. PCIe is cheaper and sufficient for single-GPU inference. For multi-GPU training, always choose SXM.

When Is the H100 Worth It vs Cheaper Alternatives?

H100 vs A100

  • H100 advantage: 3–6x faster for transformer training, FP8 support, 67% more memory bandwidth
  • A100 advantage: 30–50% cheaper ($1.50–$2.50/hr vs $2.00–$4.15/hr)
  • Choose H100: Training 13B+ parameter models, production inference for 70B+ models, time-sensitive runs
  • Choose A100: Budget-constrained teams, smaller models (<10B), non-urgent training

H100 vs L40S

  • H100 advantage: 2x memory bandwidth (HBM3 vs GDDR6), NVLink for multi-GPU, superior training performance
  • L40S advantage: 30–40% cheaper ($1.50–$2.00/hr), 48GB VRAM, strong FP8 inference
  • Choose H100: Multi-GPU training, memory-bandwidth-bound workloads, models >30B parameters
  • Choose L40S: Single-GPU FP8 inference, 7B–30B models, tighter budgets

Total Cost of Ownership: H100 Cloud Scenarios

How much will an H100 actually cost for your project? Here are three common usage scenarios:

100 Hours (Experimentation / Fine-tuning)

| Provider | Total Cost |
|---|---|
| Vast.ai ($2.10/hr) | $210 |
| Lambda Labs ($2.89/hr) | $289 |
| CoreWeave ($2.95/hr) | $295 |
| AWS ($4.15/hr) | $415 |

500 Hours (Serious Training Project)

| Provider | Total Cost |
|---|---|
| Vast.ai ($2.10/hr) | $1,050 |
| Lambda Labs ($2.89/hr) | $1,445 |
| CoreWeave ($2.95/hr) | $1,475 |
| AWS ($4.15/hr) | $2,075 |

1,000 Hours (Production / Extended Training)

| Provider | Total Cost |
|---|---|
| Vast.ai ($2.10/hr) | $2,100 |
| Lambda Labs ($2.89/hr) | $2,890 |
| CoreWeave ($2.95/hr) | $2,950 |
| AWS ($4.15/hr) | $4,150 |

At 1,000 hours, choosing Vast.ai over AWS saves over $2,000 — nearly enough to fund another 1,000 hours of H100 compute on Vast.ai.
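The three scenario tables above are straightforward to recompute from the per-hour rates. A minimal sketch, using the rates from the March 2026 comparison table:

```python
# Recompute the three TCO scenarios from per-hour H100 rates.
RATES = {                 # $/hr, from the comparison table above
    "Vast.ai": 2.10,
    "Lambda Labs": 2.89,
    "CoreWeave": 2.95,
    "AWS": 4.15,
}

for hours in (100, 500, 1_000):
    print(f"--- {hours} hours ---")
    for provider, rate in RATES.items():
        # Total cost is simply hours multiplied by the hourly rate.
        print(f"{provider:12s} ${hours * rate:>8,.0f}")
```

Swapping in your own expected GPU-hours (and any reserved-pricing discount) gives a quick first-pass budget for a project.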

H100 Price Trends: 2024–2026

H100 cloud pricing has dropped significantly as supply caught up with demand:

  • Late 2023: $4.00–$8.00/hr (extreme scarcity, waitlists everywhere)
  • Mid 2024: $3.00–$5.00/hr (supply improving, new providers launching)
  • Early 2025: $2.50–$4.00/hr (market maturing, competition increasing)
  • March 2026: $2.00–$4.15/hr (stable market, wide availability)

Prices are expected to continue a gradual decline through 2026 as NVIDIA ships more H100/H200 units and the B100/B200 generation enters cloud data centers. The H100 will remain the workhorse GPU for AI workloads for at least another 12–18 months.

Frequently Asked Questions

How much is an H100 per hour?

H100 cloud rental costs range from $2.00/hr (Voltage Park, spot) to $5.40/hr (Azure, on-demand). The market average for on-demand H100 SXM is approximately $2.50–$3.50/hr on dedicated GPU clouds, and $4.00–$5.40/hr on hyperscalers (AWS, GCP, Azure).

Is the H100 worth the premium over A100?

Yes, if you are training models with 13B+ parameters or need fast turnaround on large training runs. The H100 is 3–6x faster for transformer workloads, meaning a 3-day A100 training job can finish in 8–16 hours on H100. For smaller models and inference, the A100 often provides better cost-efficiency.

Should I buy or rent an H100?

Rent unless you will run the GPU at high utilization for 2+ years. At $2.50/hr, the break-even point for buying ($30,000) is approximately 12,000 hours — about 16 months of 24/7 operation. Factor in cooling, power ($0.50–$1.00/hr in electricity for SXM), maintenance, and depreciation, and renting is usually the better financial decision.
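The break-even figure quoted above can be reproduced with a quick calculation. The purchase price and rental rate are the illustrative numbers from this FAQ, and electricity and maintenance are ignored here, which makes buying look better than it really is:

```python
# Break-even point for buying vs renting an H100, per the FAQ figures.
PURCHASE_PRICE = 30_000   # $ upfront for one H100 (illustrative)
RENTAL_RATE = 2.50        # $/hr on-demand cloud rate (illustrative)
HOURS_PER_MONTH = 730     # average hours in a month (24/7 operation)

breakeven_hours = PURCHASE_PRICE / RENTAL_RATE
months_24_7 = breakeven_hours / HOURS_PER_MONTH

print(f"Break-even: {breakeven_hours:,.0f} hours "
      f"(~{months_24_7:.0f} months of 24/7 use)")
```

Adding power, cooling, and maintenance to the purchase side pushes the true break-even point well past this figure, which is why renting usually wins.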

What is the cheapest way to get H100 access?

Spot instances on Vast.ai ($1.50–$2.10/hr) or Voltage Park ($2.00–$2.50/hr) offer the lowest H100 pricing. The trade-off is potential interruption, so always checkpoint your training jobs. For reliable on-demand access, Lambda Labs at $2.89/hr is the best value.

When will H100 prices drop further?

H100 prices will likely continue to decrease gradually through 2026 as NVIDIA's next-generation B100/B200 GPUs become available in cloud data centers. Expect H100 pricing to settle around $1.50–$2.50/hr by late 2026 as supply fully normalizes.

Compare H100 Prices Across 17+ Providers

Stop overpaying for H100 compute. Find the cheapest H100 instance in seconds with real-time price comparison.

Compare H100 Prices Now →
