An independent guide to AI compute pricing — built for the people who actually have to buy it.
The AI compute market is opaque by design. Hyperscalers bury real pricing behind sales calls. Neoclouds have inconsistent documentation. Comparison sites are outdated, incomplete, or quietly sponsored by the providers they rank.
Meanwhile, the teams that need this information most — ML engineers evaluating platforms, startup CTOs making infrastructure bets, procurement leads at companies scaling AI — have to piece it together themselves from pricing pages, Hacker News threads, and Reddit posts.
Ozmarx is the resource I wished existed when I was in that position. It combines rigorous pricing data with the kind of contextual analysis that actually helps you make a decision — free, independent, and updated regularly.
There's no shortage of GPU comparison sites. Most are auto-generated, stale within weeks, and don't explain why prices differ or what you should actually buy for your use case. Ozmarx takes a different approach.
All prices listed on Ozmarx are drawn from providers' publicly available pricing. For hyperscalers, per-GPU prices are calculated by dividing the total instance cost by the number of GPUs in the instance; spot and reserved prices are estimates based on publicly available information.
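The per-GPU normalization described above is simple arithmetic. A minimal sketch of it, using a hypothetical 8-GPU instance and an illustrative hourly rate (not a real quote from any provider):

```python
# Sketch of the per-GPU price normalization used for hyperscaler instances.
# The instance size and price below are hypothetical, for illustration only.

def per_gpu_hourly(instance_hourly_cost: float, gpu_count: int) -> float:
    """Normalize an instance's total hourly cost to a per-GPU figure."""
    if gpu_count <= 0:
        raise ValueError("instance must contain at least one GPU")
    return instance_hourly_cost / gpu_count

# A hypothetical 8-GPU instance billed at $98.32/hour:
print(round(per_gpu_hourly(98.32, 8), 2))  # 12.29
```

This makes multi-GPU instances directly comparable to neoclouds that already quote per-GPU rates, though it assumes the non-GPU components of the instance (CPU, RAM, networking) are spread evenly across the GPUs.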
Found a pricing error? Want to suggest a provider I'm not tracking? Have feedback on an analysis piece? I'd love to hear from you.
📧 hello@ozmarx.com

Ozmarx is free to use. To sustain the site, I may earn referral fees when readers sign up for services through links on this site. These fees never influence rankings, pricing data, or editorial coverage: the same methodology applies regardless of any commercial relationship.
Any commercial relationships are disclosed clearly on the relevant pages. If a provider appears in my comparison tables, it's because I've chosen to track them, not because they paid to be there. If that ever changes, it will be clearly labeled.
See live pricing across 12+ providers in one place.
⚡ Open Comparison Tool