benlivengood · 16 days ago
I dunno, GPT-OSS and Llama and Qwen and a half dozen other large open-weight models?

I really can't imagine OpenAI or Anthropic turning off inference for a model that my workplace is happy to spend >$200/person/month on. Google still has piles of cash and no reason to turn off Gemini.

The thing is, if inference is truly heavily subsidized (I don't think it is, because places like OpenRouter charge less than the big players for proportionally smaller models), then we'd probably happily pay >$500 a month for the current frontier models if everyone gave up on training new models because of some oddball scaling limit.

crimsoneer · 16 days ago
Yeah, this is silly. Plenty of companies are hosting their own now, sometimes on-prem. This isn't going away.
iLoveOncall · 16 days ago
> we'd probably happily pay >$500 a month for the current frontier models

Try $5,000. OpenAI loses hundreds of billions a year; they need a 100x, not a 2x.

gingersnap · 16 days ago
But they are not losing 100x on inference for high-paying customers. Their biggest losses are free users plus training/development costs.
weirdmantis69 · 16 days ago
Why lie on a site where people know things.
filoleg · 16 days ago
OpenAI loses hundreds of billions a year on inference? I strongly doubt it
ndriscoll · 16 days ago
$60k/yr still seems like a good deal for the productivity multiplier you get on an experienced engineer costing several times that. Actually, I'm fairly certain that some optimizations I had Codex do this week would already pay for that by letting us scale down pod resource requirements, and that's just from me telling it to profile our code and find high-ROI things to fix, taking only part of my focus away from planned work.

Another data point: I gave Codex a 2-sentence description (intentionally vague and actually slightly misleading) of a problem that another engineer spent ~1 week root-causing a couple months ago, and it found the bug in 3.5 minutes.

These things were hot garbage right up until the second they weren't. Suddenly, they are immensely useful. That said, I doubt my usage costs OpenAI anywhere near that much.

apf6 · 16 days ago
> it's the running costs of these major AI services that are also astronomical

There are wildly different reports about whether the cost of inference alone (not the training) is expensive or not...

Sam Altman has said “We’re profitable on inference. If we didn’t pay for training, we’d be a very profitable company.”

But a lot of folks are convinced that inference is currently being subsidized by burning through investor capital.

I think looking at open-source model hosting is pretty convincing. Look at, say, https://openrouter.ai/z-ai/glm-4.7: there are about 10 different random API providers competing on price, and they'll serve GLM 4.7 tokens at around $1.50-$2.50 per million output tokens (which, by the way, is roughly a tenth of the cost of Opus 4.5).

I seriously doubt that all these random services no one has ever heard of are also being propped up by investor capital. It seems more likely that $1.50-$2.50 is the near-cost price.

If that's the actual cost, and considering that open-source models like GLM are still pretty useful when used correctly, then it's pretty clear that AI is here to stay.
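
To make that concrete, here's a rough back-of-envelope sketch in Python. The per-token prices are the ballpark figures above, and the monthly token volume is a made-up "heavy coding user" assumption:

    # Rough cost comparison between a cheap open-weight host and a frontier
    # model. Prices drift constantly; these are the ballpark figures above.
    GLM_PER_OUTPUT_MTOK = 2.00    # midpoint of the $1.50-$2.50 range
    OPUS_PER_OUTPUT_MTOK = 20.00  # "roughly 10x", per the comparison above

    output_mtok_per_month = 50    # assumed heavy use: 50M output tokens/month

    glm_cost = output_mtok_per_month * GLM_PER_OUTPUT_MTOK
    opus_cost = output_mtok_per_month * OPUS_PER_OUTPUT_MTOK
    print(f"GLM-class: ${glm_cost:.0f}/mo, frontier: ${opus_cost:.0f}/mo")
    # -> GLM-class: $100/mo, frontier: $1000/mo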

UncleEntity · 16 days ago
>> Sam Altman has said “We’re profitable on inference. If we didn’t pay for training, we’d be a very profitable company.”

Any individual Sunday service is nearly cost-free if you don't count the 100+ years it took to build the church...

apf6 · 16 days ago
Lol, anyway, the point is that even in a scenario where all the major models disappeared tomorrow (including OpenAI, Anthropic, etc.), we would still keep using the existing open-source models (GLM, DeepSeek, Qwen) for a long, long time.

There's no scenario where AI goes away completely.

I don't think the "major AI services go away completely" scenario is realistic at all when you look at those companies' revenue and customer demand, but that's a different debate I guess.

program_whiz · 16 days ago
Perhaps this is a more helpful model than worrying about the "billions spent" and whether it's inference vs. training.

How much would it cost you to deploy a model that you and maybe a few coworkers could effectively use? Probably $400k to buy all the hardware required to host a top-tier model that could do a few hundred tokens per second for 10 concurrent users. That's $40k per person. Amortize the hardware over 5 years and that's $8k per person per year (roughly), with no training costs (that's just you buying hardware and running it yourself). So you need ~$670 per user monthly just to cover the hardware to run the model (this is with no staffing costs, internet, taxes, electricity, hosting, housing, etc.).
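
A minimal sketch of that arithmetic (all inputs are this comment's assumptions, not measured numbers):

    # Amortized hardware cost per user, using the assumptions above.
    hardware_cost = 400_000       # assumed cost of a top-tier self-hosted rig
    users = 10
    amortization_years = 5

    per_user = hardware_cost / users                  # $40,000
    per_user_yearly = per_user / amortization_years   # $8,000
    per_user_monthly = per_user_yearly / 12           # ~$667
    print(f"~${per_user_monthly:,.0f} per user per month, hardware only")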

So, just food for thought, but the $200/month Claude Code plan is probably still losing money even on inference alone.

Since they are in the software realm, they are probably shooting for a 90% profit margin. Using the above example, the price would be (~$670 + R&D + opex) × 10. My guess is that, assuming no more training (which probably can never be profitable at current rates), they need something like $20k per month per user, which is why OpenAI floated that number previously.

estimator7292 · 16 days ago
After a hypothetical AI crash, the cost of hardware will plummet. It will suddenly become quite affordable to spin up a GPU or five on-prem to host a couple of models for internal use.

The only reason hardware is so expensive now is to scalp the hyperscalers. Once that demand crashes, the supply will skyrocket and prices will crash.

program_whiz · 16 days ago
Fair point -- but my overall point about how much users may have to pay to make these companies profitable stands. Maybe it changes if prices stay depressed for years, but these companies are doing buildout at current prices, and they need to make returns on hardware they are buying now. I suppose they could bank on prices coming down by a factor of 10 in 2 to 3 years, in which case the current price ($200 per month) might be profitable (disregarding training, employees, power, etc.).
Blemiono · 16 days ago
An RTX 6000 Pro costs $9k and comes with 96GB of memory; that's enough to run good models.

It can easily serve 10 people or more depending on the overall usage pattern (coding vs everything else).

So now imagine hardware getting better every year, models getting better too, and everything overall getting more efficient.

From M1 to M4, Apple increased performance by 100% in 4 years.

And there are inference-optimized chips like Groq's.

Don't forget KV caching and other optimizations.

I think your math is off.

And AI is already better than interns. An intern costs you at least $1k per month, probably $2k.

For me the math works just fine.
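
For what it's worth, running the same amortization exercise with these single-card numbers (card price, user count, and lifespan are all this comment's assumptions):

    # Per-user hardware cost for one shared RTX 6000 Pro.
    card_cost = 9_000        # assumed card price
    users = 10               # assumed concurrent users
    amortization_years = 3   # assume a short life; hardware improves fast

    per_user_monthly = card_cost / users / amortization_years / 12
    print(f"~${per_user_monthly:.0f} per user per month")  # -> ~$25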

davidfiala · 16 days ago
Missing option 3: hardware and software continue to evolve, and AI becomes cost-efficient at the same price and eventually even lower.
UncleEntity · 16 days ago
There's no reason I can think of where this isn't the case.

I mean, we're not even up to the "Model T" era of AI development and more like in the 'coach-built' phase where every individual instance needs a bunch of custom work and tuning. Just wait until they get them down to where every Teddy Ruxpin has a full LLM running on a few AA batteries and then see where the market lands.

I always imagine these AI discussions in the context of a bunch of horses discussing these 'horseless carriages' circa 1900...

t0mas88 · 16 days ago
The author may have a point, but the handwavy numbers read as if he has no idea how accounting works. It seems like he doesn't understand capex vs. opex and how they influence profitability (and their cashflow effects).
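
To illustrate the distinction with invented numbers: the same training spend yields very different reported year-one profit depending on whether it's expensed immediately (opex) or capitalized and depreciated (capex), even though the cash going out the door is identical:

    # Toy capex-vs-opex illustration. All figures are invented.
    revenue = 1_000_000_000
    inference_opex = 400_000_000
    training_cost = 500_000_000
    depreciation_years = 5  # straight-line depreciation if capitalized

    expensed = revenue - inference_opex - training_cost
    capitalized = revenue - inference_opex - training_cost / depreciation_years

    print(f"expensed as opex:      ${expensed / 1e6:+,.0f}M")     # +$100M
    print(f"capitalized, year one: ${capitalized / 1e6:+,.0f}M")  # +$500M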
pvab3 · 16 days ago
Training is the expensive part here. It seems much more likely that the training of these models slows down drastically and is written off as a sunk cost, a few companies continue running inference on years-old models, and the free versions go away.
iLoveOncall · 16 days ago
This is addressed in the very first sentence of the article that you obviously didn't read.
pvab3 · 16 days ago
No, it's not. It never makes any distinction between training and inference. It just lumps it all together as "running" the models.
gingersnap · 16 days ago
But he's not wrong. Training + inference on free customers is the black hole here.
crazygringo · 16 days ago
Except it's not. And the footnote that might be expected to clarify turns out to be a joke footnote.
pixl97 · 16 days ago
As a business in our current age you are stuck in a valley between two wildly different risks.

1. AI disappears, goes up in price, etc. All the money you've spent goes up in smoke, or you have to spend a lot more money to keep the engine running.

2. AI does not disappear, becomes cheaper, and eats your business's primary revenue generation for lunch.

Number 1 could happen tomorrow. Number 1 could happen after number 2. Number 1 may never happen.

Also, expect that even if the AI market crashes, AI has already massively changed the economy, that at least some investment will keep going into making AI more efficient, and that at any point number 2 could spring out of nowhere yet again.

yellowapple · 16 days ago
> Self-hosting an AI with your own hardware is probably just as cost-prohibitive, even if you don't value your time. In part because a ton of people will get this idea at the same time, impacting hardware prices even more. And the operating costs of AI seem significant. Would it even be possible to setup your own AI and achieve the same productivity level?

I know this is probably an annoying question, but… has the author actually tried self-hosting an AI with their own hardware? I have; ollama (and various frontends thereof) makes it straightforward, and it's absolutely not cost-prohibitive — I've run my share of LLMs even on laptops without dedicated GPUs at all, and while the experience wasn't great compared to the commercial options, it wasn't outright unusable, either. Locally-hosted LLMs are already finding their way into various applications; that's only going to get more viable over time, not less (unless the computing hardware industry takes a catastrophic nosedive, in which case AI affordability is arguably the least of our worries).

I'm sure the author understands this and is just being hyperbolic in the article's title, but the AI bubble bursting ≠ AI disappearing, for the same reason the dotcom bubble bursting ≠ the World Wide Web disappearing. The bubble will burst when AI shifts from being novel to being mundane, just as with any other technology-related bubble — and that entails a degree of affordability and ubiquity that's mutually exclusive with any notion of AI “disappearing”. Hopefully it'll mean companies being less motivated to shove AI “features” down everyone's throats, but the virtually-intelligent cat is already out of Pandora's box: the technology's here to stay, and I think it's presumptuous to think the race to the bottom w.r.t. cost is anywhere near the finish line.

estimator7292 · 16 days ago
If you have a Linux machine, you can just install ollama, ollama-cuda, or ollama-rocm. That's it. It runs out of the box. If your GPU is supported, that Just Works, too. Usually, anyway.

I have an old dual Xeon server from about 2015. 32 2.4GHz cores and 128GB of RAM. It runs models painfully slow (and loud) but they run just fine. My modern Ryzen system from last year works out of the box with full AMD GPU support.

I have yet to find a situation where ollama doesn't work at all out of the box. It literally just turns on and goes. Maybe slow, maybe without GPU, but by god you'll have an LLM running.
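
For anyone curious what "just turns on and goes" looks like, here's a minimal sketch of hitting a local ollama server over its default HTTP API (assumes you've already pulled a model, e.g. with "ollama pull llama3"):

    # Talk to a locally running ollama instance; it listens on
    # localhost:11434 by default.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",  # any model you've pulled locally
            "prompt": "Why are self-hosted LLMs viable?",
            "stream": False,    # return one JSON blob, not a token stream
        },
        timeout=300,
    )
    print(resp.json()["response"])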