This throughput assumes 100% utilization. A bunch of things raise the cost at scale:
- There are no on-demand GPUs at this scale. You have to rent them on multi-year contracts, so you have to lock in some number of GPUs for your maximum throughput (or some sufficiently high percentile), not your average throughput. Your peak throughput during west coast business hours is probably 2-3x higher than your throughput at tail hours (east coast morning, west coast evening).
- GPUs are often regionally locked due to data processing issues + latency issues. Thus, it's difficult to utilize these GPUs overnight because Asia doesn't want their data sent to the US and the US doesn't want their data sent to Asia.
These two factors mean that GPU utilization comes in at 10-20%. Now, if you're a massive company that spends a lot of money on training new models, you could conceivably slot in RL inference or model training to happen in these off-peak hours, maximizing utilization.
But for those companies purely specializing in inference, I would _not_ assume that these 90% margins are real. I would guess that even when it seems "10x cheaper", you're only seeing margins of 50%.
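To make the utilization argument concrete, here is a rough back-of-the-envelope sketch. The $2/GPU-hour rate is an assumption, and the ~10M output tokens per GPU-hour is just the per-GPU rate implied by the post's 80M tokens/hour per 8-GPU node; treat it as illustrative, not measured:

```python
# Rough sketch: cost per token scales inversely with utilization.
# All numbers are illustrative assumptions, not measured figures.

gpu_hour_cost = 2.00            # assumed contracted $/H100/hour
tokens_per_gpu_hour = 10e6      # ~80M output tokens/hr per 8-GPU node => ~10M per GPU-hour

def cost_per_million_tokens(utilization: float) -> float:
    """Cost per million output tokens when only `utilization` of the
    provisioned GPU-hours actually serve traffic (the rest sit idle)."""
    effective_tokens = tokens_per_gpu_hour * utilization
    return gpu_hour_cost / effective_tokens * 1e6

for util in (1.0, 0.5, 0.2, 0.1):
    print(f"{util:>4.0%} utilization -> ${cost_per_million_tokens(util):.2f}/M tokens")

# At a fixed price of, say, $2.00/M output tokens, a ~90% margin at full
# utilization shrinks to roughly break-even around 10% utilization.
```

Under these assumptions, the same hardware that looks like a 90% margin at 100% utilization is close to break-even at the 10-20% utilization range described above.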
Do we know how big the "batch processing" market is? I know the major providers offer 50%+ off for off-peak processing.
I assumed it was partly to correct this problem, and on the surface it seems like it'd be useful for big-data shops where process-eventually is good enough, i.e. it could be a relatively big market. Is it?
A major issue we have right now is that we want the coding process to be more "Agentic", but we don't have an easy way for LLMs to determine what to pull into context to solve a problem. This is a problem I am working on with my personal AI search assistant, which I talk about below:
https://github.com/gitsense/chat/blob/main/packages/chat/wid...
Analyzers are the "Brains" for my search, but generating the analysis is tedious and can be costly. I'm working on the tedious part, and with batch processing you can probably process thousands of files for under 5 dollars with Gemini 2.5 Flash.
With batch processing and the ability to continuously analyze tens of thousands of files, I can see companies wanting to make "Agentic" coding smarter, which should help with GPU utilization and drive down the cost of software development.
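For a sense of scale, here is the kind of arithmetic behind "thousands of files for under 5 dollars." The per-token rates, the 50% batch discount, and the per-file token counts below are placeholder assumptions, not quoted Gemini 2.5 Flash prices, so check current pricing before relying on it:

```python
# Illustrative estimate of batch-analyzing a repo with a cheap flash-tier model.
# Rates and token counts are assumptions, not quoted prices.

files = 5_000
avg_input_tokens_per_file = 2_000      # file content + analyzer prompt (assumed)
avg_output_tokens_per_file = 400       # structured analysis per file (assumed)

input_price_per_m = 0.15 * 0.5         # $/M input tokens with assumed 50% batch discount
output_price_per_m = 0.60 * 0.5        # $/M output tokens with assumed 50% batch discount

cost = (files * avg_input_tokens_per_file / 1e6 * input_price_per_m
        + files * avg_output_tokens_per_file / 1e6 * output_price_per_m)
print(f"~${cost:.2f} to analyze {files:,} files")   # on the order of a few dollars
```

Even with generous per-file token counts, the total lands in the low single-digit dollars, which is why off-peak batch tiers are attractive for this kind of workload.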
However, I don’t think these companies provision capacity for peak usage and let it idle during off peak. I think they provision it at something a bit above average, and aim at 100% utilization for the max number of hours in the day. When there is not enough capacity to meet demand they utilize various service degradation methods and/or load shedding.
Is this why I've gotten Anthropic/Claude emails every single day since I signed up for their status updates? I just assumed they were working hard on production bugs, but in light of this comment: if you don't hit capacity constraints every day, you are wasting money?
If you are willing to spread your workload out over a few regions, getting that many GPUs on demand can be doable. You can use something like compute classes on GCP to fall back to different machine types if you do hit stockouts. That doesn't make you impervious to stockouts, but it makes you a lot more resilient.
You can also use duty-cycle metrics to scale down your GPU workloads and get rid of some of the slack.
However, this all assumes realtime requirements. For batching, you can smooth over the demand curve, and you don't care about latency.
Re the overnight problem: that's why some providers offer batch-tier jobs at 50% off that return results over up to 12 or 24 hours, for non-interactive use cases.
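As a concrete example of that tier, OpenAI-style batch endpoints take a file of requests plus a completion window instead of serving synchronously. A minimal sketch; the file name and the exact workflow details are assumptions, so check your provider's current batch API docs:

```python
# Sketch of submitting a non-interactive batch job (OpenAI-style Batch API).
# The provider trades a long completion window (hours, not seconds) for a
# roughly 50% discount. "requests.jsonl" is a placeholder file where each
# line is one chat-completion request with a custom_id.
from openai import OpenAI

client = OpenAI()

batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",   # results arrive whenever off-peak capacity allows
)
print(batch.id, batch.status)
```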
> These two factors mean that GPU utilization comes in at 10-20%.
Why don't these two factors cancel out? Why wouldn't a company building a private GPU cluster for their own use, also sit a workload scheduler (e.g. Slurm) in front of it, enable credit accounting + usage-based-billing on it, and then let validated customer partners of theirs push batch jobs to their cluster — where each such job will receive huge spot resource allocations in what would otherwise be the cluster's low-duty point, to run to completion as quickly as possible?
Just a few such companies (and universities) deciding to rent their excess inference capacity out to local SMEs, would mean that there would then be "on-demand GPUs at this scale." (You'd have to go through a few meetings to get access to it, but no more than is required to e.g. get a mortgage on a house. Certainly nothing as bad as getting VC investment.)
This has always been precisely how the commercial market for HPC compute works: the validated customers of an HPC cluster send off flights of independent "wide but short" jobs, which get resource-packed and fair-scheduled alongside other clients' jobs into a 2D (nodes, time) matrix, with everything executed overnight, just a few wide jobs at a time.
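To make the (nodes, time) packing concrete, here is a toy first-fit sketch. This is not Slurm's actual scheduling algorithm, and the job shapes are invented; it only illustrates how independent wide-but-short jobs backfill an overnight low-duty window:

```python
# Toy illustration of packing batch jobs into a (nodes, time) matrix
# during an overnight window. Not a real scheduler; job shapes are made up.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int      # how wide the job is
    hours: int      # how long it runs

CLUSTER_NODES = 12          # e.g. 12 x 8xH100 nodes
WINDOW_HOURS = 8            # overnight low-duty window

def pack(jobs: list[Job]) -> list[tuple[Job, int]]:
    """Greedy first-fit: widest jobs first, each started at the earliest
    hour where enough nodes are free for its whole duration."""
    free = [CLUSTER_NODES] * WINDOW_HOURS      # free nodes per hour slot
    schedule = []
    for job in sorted(jobs, key=lambda j: -j.nodes):
        for start in range(WINDOW_HOURS - job.hours + 1):
            if all(free[t] >= job.nodes for t in range(start, start + job.hours)):
                for t in range(start, start + job.hours):
                    free[t] -= job.nodes
                schedule.append((job, start))
                break
    return schedule

jobs = [Job("train-sweep", 8, 3), Job("batch-inference", 6, 2),
        Job("rl-rollouts", 4, 4), Job("eval-suite", 2, 1)]
for job, start in pack(jobs):
    print(f"{job.name:>16}: {job.nodes} nodes, hours {start}-{start + job.hours}")
```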
So why don't we see a similar commercial "GPU HPC" market?
I can only assume that the companies building such clusters are either:
- investor-funded, and therefore not concerned with dedicating effort to invent ways to minimize the TCO of their GPUs, when they could instead put all their engineering+operational labor into grabbing market share
- bigcorps so big that they have contracts with one big overriding "customer" that can suck up 100% of their spare GPU-hours: their state's military / intelligence apparatus
...or, if not, then it must turn out that these clusters are being 100% utilized by their owners themselves — however unlikely that may seem.
Because if none of these statements are true, then there's just a proverbial $20 bill sitting on the ground here. (And the best kind of $20 bill, too, from a company's perspective: rent extraction.)
That is what I'm doing with my excess compute, fabrication, CNC, laser, 3D printing, reflow oven, etc. capacity in between hardware revs for my main product. I also bill out my trusted subcontractors.
I validate the compute renters because of ITAR. Lots of hostile foreign powers are trying to access compute.
My main business is ITAR-related, so I have incredibly high security in place already.
We are multi-tenant from day zero and have Slurm etc. in place for accounting reasons for federal contracts. We are actually spinning up federal contracting as a service and will do a Show HN when that launches.
Riches in the niches and the business of business :)
> Why wouldn't a company ... let validated customer partners of theirs push batch jobs
A company standing up this infrastructure is presumably not in the business of selling time-shares of infrastructure, they're busy doing AI B2B pet food marketing or whatever. In order to make that sale, someone has to connect their underutilized assets with interested customers, which is outside of their core competency. Who's going to do that?
There's obviously an opportunity here for another company to be a market maker, but that's hard, and is its own speciality.
There was some tangentially related discussion in this post: https://news.ycombinator.com/item?id=45050415, but this cost analysis answers so many questions and gives me a better idea of how huge a margin a lot of these providers could be taking on inference. Plus I'm sure that Google or OpenAI can get more favorable data center rates than the average Joe Schmoe.
A node of 8 H100s will run you $31.40/hr on AWS, so for all 96 you're looking at $376.80/hr. With 188 million input tokens/hr and 80 million output tokens/hr, that comes out to around $2/million input tokens, and $4.70/million output tokens.
This is actually a lot more than Deepseek r1's rates of $0.10-$0.60/million input and $2/million output, but I'm sure major providers are not paying AWS p5 on-demand pricing.
Edit: those throughput figures were per node, so the actual input and output prices should be divided by 12: about $0.17/million input tokens and $0.39/million output tokens.
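Spelling out the corrected arithmetic next to the original slip, using the figures quoted in this comment:

```python
# AWS p5 on-demand pricing vs. the reported per-node throughput.
aws_node_per_hour = 31.40            # $/hr for one 8xH100 node (figure quoted above)
nodes = 12                           # 96 GPUs total
input_tok_per_node_hr = 188e6        # per-node throughput from the post
output_tok_per_node_hr = 80e6

# Original slip: cluster cost divided by single-node throughput.
cluster_per_hour = aws_node_per_hour * nodes                     # $376.80/hr
print(cluster_per_hour / (input_tok_per_node_hr / 1e6))          # ~$2.00/M input
print(cluster_per_hour / (output_tok_per_node_hr / 1e6))         # ~$4.71/M output

# Corrected: per-node cost against per-node throughput.
print(aws_node_per_hour / (input_tok_per_node_hr / 1e6))         # ~$0.17/M input
print(aws_node_per_hour / (output_tok_per_node_hr / 1e6))        # ~$0.39/M output
```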
AWS is absolutely not cheap, and never has been. You want to look for the Hetzner of the GPU world, like runpod.io, where H100s are $2 an hour, so $16/hr for 8; that's already half of AWS. You can also almost certainly get a volume discount if you're looking for 96.
An H100 costs about $32k; amortized over 3-5 years that's roughly $1.22 (3 years) to $0.73 (5 years) per hour, so after adding in electricity, CPU/RAM, etc., runpod.io is running much closer to the actual cost than AWS.
Their GPU availability is amazing though.
Significantly cheaper now that most cloud providers are buying Blackwell.
> A node of 8 H100s will run you $31.40/hr on AWS, so for all 96 you're looking at $376.80/hr
And what stinks is that you can't even build a Dell/HPE server like this online. You have to 'request a quote' for an 'AI Server'.
Going through Supermicro, you're looking at about $60k for the server, plus 8 GPUs at $25,000 each, so you're close to $300,000 for an 8-GPU node.
Now, that doesn't include networking, storage, racks, electricity, cooling, someone to set it all up for you, $1,000 DAC cables, NVIDIA middleware, downtime (the H100s are the flakiest pieces of junk ever and will need to be replaced every so often)...
Setting up a 96-H100 cluster (12 of those puppies) in this case is probably going to cost you $4-5 million. But it should cost less than AWS after a year and a half.
The hot parts are/were on allocation at both vendors. They try to suss out your use case and redirect you to less constrained parts.
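A rough breakeven check on the build-out figures above, using the parent's own numbers (it ignores power, colo, and staffing opex, which would push the breakeven point later):

```python
# Self-build vs. AWS on-demand breakeven, using the figures quoted above.
node_cost = 60_000 + 8 * 25_000       # chassis + 8 H100s per the parent comment
cluster_capex = 12 * node_cost        # ~$3.1M of bare hardware for 96 GPUs
cluster_capex_loaded = 4_500_000      # parent's estimate incl. networking, racks, setup

aws_cluster_per_hour = 376.80         # 12 x 8xH100 nodes on-demand
aws_per_year = aws_cluster_per_hour * 24 * 365   # ~$3.3M/year

print(f"bare hardware: ${cluster_capex / 1e6:.1f}M")
print(f"AWS on-demand: ${aws_per_year / 1e6:.1f}M/year")
print(f"breakeven:     ~{cluster_capex_loaded / aws_per_year:.1f} years")   # ~1.4 years
```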
188M input / 80M output tokens per hour was per node I thought?
Reversing out these numbers tells us that they're paying about $2/H100/hour (or $16/hour for an 8xH100 node).
Disclaimer (one of my sites): https://www.serversearcher.com/servers/gpu says that a one-month commit on an 8xH100 node goes for $12.91/hour. The "I'm buying the servers and putting them in a colo" rate usually works out at around $10/hour, so there's scope here to reduce the cost by ~30% just by doing better/more committed purchasing.
OK, so the authors apparently used Atlas Cloud hosting, which charges $1.80 per H100/hr. That would change the overall cost to around $0.08/million input and $0.18/million output, which seems much more in line with massive inference margins for major providers.
Inference is more profitable than I thought.
Is that just the cost of electricity, or does it include the cost of the GPUs spread out over their predicted lifetime?
Maybe the cost of renting?
Depreciation and GPU failure rate over time must be considered, which I don't see mentioned in the article.
This is all costs included. That's 22k tokens per second per node, i.e. per 8 H100s. With 12 nodes they get 264k tokens per second, or 950 million an hour. That gets you to roughly $0.2021 per million at $2 an hour for an H100, which is what they go for on services such as runpod.io (cheaper if not paying spot price, plus volume discounts).
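The arithmetic, spelled out; the $2/hour rental rate is the assumption from the comment above, not a quoted price:

```python
# Checking the ~$0.20/million figure from the reported throughput.
output_tok_per_sec_per_node = 22_000      # ~80M output tokens/hour per 8xH100 node
nodes = 12
h100_per_hour = 2.00                      # assumed rental rate (runpod-style)

cluster_cost_per_hour = h100_per_hour * 8 * nodes                    # $192/hr
cluster_tok_per_hour = output_tok_per_sec_per_node * nodes * 3600    # ~950M tokens/hr

print(cluster_cost_per_hour / (cluster_tok_per_hour / 1e6))  # ~$0.20 per million output tokens
```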
H100s can be $2 an hour, so $192 an hour for the full cluster. They report 22k tokens per second, so ~80 million an hour; that's $16 an hour at $0.2 per million. Maybe a bit more for input tokens, but it seems a long way off.
Super helpful to see actual examples of what it (roughly) can look like to deploy production inference workloads, and also the latest optimization efforts.
I consult in this space and clients still don't fully understand how complex it can get to just "run your own LLM".
Just in case you have $3-4M lying around somewhere for some high quality inference. :)
SGLang quotes a 2.5-3.4x speedup compared to the H100s. They also note that more optimizations are coming, but they haven't yet published part 2 of the blog post.
Isn't Blackwell optimized for FP4? This blog post runs Deepseek at fp8, which is probably the sweet spot but new models with fp4 native training and inference would be drastically faster than fp8 on blackwell.
Separation of the prefill and decode phases with SGLang is quite nifty! Normally 8xH100 would barely be able to hold the 4-bit quantization of the model, without even considering the KV cache. One prefill node for 3 decode nodes is also fascinating; nice writeup.
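Rough memory arithmetic behind that point; the parameter count and the precision-to-bytes mapping are ballpark assumptions:

```python
# Why one 8xH100 node is tight: weight footprint vs. HBM, ignoring activations.
params = 671e9                 # DeepSeek-V3/R1 total parameters (approximate)
hbm_per_gpu_gb = 80            # H100
gpus_per_node = 8

def weights_gb(bits_per_param: float) -> float:
    """Weight storage in GB at a given precision."""
    return params * bits_per_param / 8 / 1e9

for label, bits in [("FP8", 8), ("4-bit", 4)]:
    print(f"{label}: ~{weights_gb(bits):.0f} GB of weights vs "
          f"{hbm_per_gpu_gb * gpus_per_node} GB of HBM per node")

# FP8 weights (~670 GB) exceed a single node's 640 GB before any KV cache,
# and the 4-bit quant (~335 GB) leaves limited headroom for KV cache at long
# context, which is part of why the deployment spans multiple nodes.
```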
Everything else uses more energy for both training and inference. Reducing the energy footprint is our highest priority in this domain. It outweighs the other considerations like it being Chinese, run by a hedge fund, etc. None of that matters if we destroy our ability to live on this planet. DeepSeek is not good enough, but we need to choose it in order to encourage competition on this front specifically. It's more important that companies focus on that than spend time improving other metrics.
This is why I use DS though. I think it's the only ethical option due to its efficiency, and I think that outweighs all other considerations at this point.