About Ozmarx

An independent guide to AI compute pricing — built for the people who actually have to buy it.

Why Ozmarx exists

The AI compute market is opaque by design. Hyperscalers bury real pricing behind sales calls. Neoclouds have inconsistent documentation. Comparison sites are outdated, incomplete, or quietly sponsored by the providers they rank.

Meanwhile, the teams that need this information most — ML engineers evaluating platforms, startup CTOs making infrastructure bets, procurement leads at companies scaling AI — have to piece it together themselves from pricing pages, Hacker News threads, and Reddit posts.

Ozmarx is the resource I wished existed when I was in that position. It combines rigorous pricing data with the kind of contextual analysis that actually helps you make a decision — free, independent, and updated regularly.

What makes this different

There's no shortage of GPU comparison sites. Most are auto-generated, stale within weeks, and don't explain why prices differ or what you should actually buy for your use case. Ozmarx is built differently:

  • 🔍 Truly independent. No provider pays for placement. Pricing is presented the same way for everyone.
  • 📝 Written by a practitioner. Every piece of analysis is written by someone who has actually bought and managed GPU compute — not an industry observer.
  • 🔄 Updated weekly. GPU pricing moves fast. All pricing data is manually verified and timestamped. If a price changes significantly, it's flagged.
  • 🆓 Always free. SemiAnalysis charges thousands per year for similar analysis. Ozmarx is free because the audience it serves shouldn't need a budget line item for access to basic market information.
  • 🎯 Written for buyers. Not investors. Not industry observers. The person reading Ozmarx is about to make a purchasing decision — the content is written with that in mind.

Pricing Methodology

All prices listed on Ozmarx are:

01 Sourced directly from official provider pricing pages
02 Verified manually at least once per week
03 Listed in USD per GPU per hour (on-demand unless noted)
04 Timestamped so you know how fresh the data is
05 Never adjusted based on provider relationships

For hyperscalers, per-GPU prices are calculated by dividing total instance cost by the number of GPUs in the instance. Spot and reserved prices are estimates based on publicly available information.
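The normalization described above can be sketched in a few lines of Python. The instance name and hourly rate below are illustrative placeholders, not live pricing data from any provider.

```python
def per_gpu_hourly(instance_hourly_usd: float, gpu_count: int) -> float:
    """Divide total on-demand instance cost by the number of GPUs it contains."""
    if gpu_count < 1:
        raise ValueError("instance must contain at least one GPU")
    return round(instance_hourly_usd / gpu_count, 2)

# Example: a hypothetical 8-GPU instance billed at $98.32/hour on demand
print(per_gpu_hourly(98.32, 8))  # 12.29 per GPU per hour
```

Rounding to the cent is a presentation choice; the underlying comparison tables keep the unrounded figure.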

Providers Tracked

Hyperscalers
AWS · Google Cloud · Azure
Neoclouds
CoreWeave · Lambda Labs · RunPod · Together AI · JarvisLabs · Fluidstack
Marketplaces
Vast.ai · Salad

Get in Touch

Found a pricing error? Want to suggest a provider I'm not tracking? Have feedback on an analysis piece? I'd love to hear from you.

📧 hello@ozmarx.com

Revenue Disclosure

Ozmarx is free to use. To sustain the site, I may earn referral fees when readers sign up for services through links on this site. These fees never influence rankings, pricing data, or editorial coverage — the same methodology applies regardless of any commercial relationship.

Any commercial relationships are disclosed clearly on relevant pages. If a provider appears in the comparison tables, it's because I've chosen to track them — not because they paid to be there. If that ever changes, it will be clearly labeled.

Ready to compare GPU pricing?

See live pricing across every tracked provider in one place.

⚡ Open Comparison Tool