If you know anything about tech, you will know that tech as an industry is highly deflationary -- billionaires use the same iPhones as you do! (In contrast, they don't drive the same cars you do.)
This boils down to the fact that chip fabs have massive fixed costs and near-zero marginal costs, and these chips power all of tech. So the more chips they can produce for a given fab, the more profit they can make, meaning that companies are incentivized to sell as many products as possible for as low a price as possible.
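The fixed-vs-marginal cost point can be made concrete with a toy calculation. All numbers here are made up for illustration, not real TSMC figures:

```python
# Toy model of fab economics: the average cost per chip falls as volume
# rises, because the huge fixed cost of building the fab is amortized
# over more units while the per-chip marginal cost stays small.
FIXED_COST = 20e9      # hypothetical fab build-out cost ($20B)
MARGINAL_COST = 50.0   # hypothetical per-chip cost once the fab is running

def avg_cost_per_chip(volume: int) -> float:
    return FIXED_COST / volume + MARGINAL_COST

for volume in (10_000_000, 100_000_000, 1_000_000_000):
    print(f"{volume:>13,} chips -> ${avg_cost_per_chip(volume):,.2f} each")
```

At 10M chips the fab cost dominates; at 1B chips the price approaches the marginal cost, which is why fabs are incentivized to push volume as high as possible.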
We're supply constrained in the short-term because demand for these AI tools is so high that TSMC and other chip manufacturers can't keep up. But long term, supply/demand will equalize and tech will continue its deflationary trend. Sure, the frontier will always require the best possible chips, but AI coding is highly competitive, and competition drives price decreases. So prices may stay high right now, but it seems unlikely to me that this will stay true long-term.
All four of the author's steelmanned arguments at the end for a price decrease seem likely to come true already: competition is intense (OAI brags about how much cheaper they are compared to Claude), OAI subsidizes open-source influencers already, companies' earnings calls all call for more investment in fabs, and we're already close to saturating all of the benchmarks used for RL!
It is still exactly the same iPhone tech-wise, just with a custom "case".
I wouldn't go so far as to call it "brain disease" though: in a sense, it is OK for someone well off to spend on expensive products (made by the less rich), so things would equalize at least a bit.
Just like we in IT might happily pay 3% of our salary on slightly better shoes, and someone else would claim we have a "brain disease" because you can get perfectly good shoes for 5x less money.
This is something that is often cited as a truism, but there is no natural law that makes it necessarily true. There is plenty of room for black swans in market dynamics. So much so that the term "black swan" is probably better known in economics than in any other field.
Competition may drive down the price of LLMs, but there is a greater-than-zero probability that it won't, and if it won't, your whole counter-argument falls apart.
I can't think of any high volume/consumer electronics/computer technology that has not been driven down in price over time. So based on historical precedent, I think your "greater than zero probability" might be only a tiny bit greater than zero.
> The top tier subscription prices are increasing exponentially
WILD graph that misrepresents what is happening.
There's a bunch of $20 subscriptions, and a bunch of $200 subscriptions. Devin has a $500 subscription. That's it.
The cost per unit of intelligence has been dropping every month. The cost per "completed task" has also been dropping. There is no sign of this reversing course. Graphing the price of a subscription, without taking into account what that subscription is getting you, is poor authorship.
"The underlying purpose of AI is to allow wealth to access skill while removing from the skilled the ability to access wealth. --@jeffowski"
While I don't think that's the only purpose, I can't help but think that people who become dependent on these tools will have neither wealth nor skill. Keep your skills sharp!
I don't see what's non-sensical. The point of this technology, to many of the super wealthy, is to commoditize white collar skills and drive the price to zero. They're not really coy about this, if anything they won't stop talking about it. They couch it in language about being "afraid", but if they truly were they'd take steps to reduce harm - which they're not doing. The silver lining I guess is most of them don't actually understand the things they're trying to replace so they don't realize how far off they are.
I think Warhol’s quote is nostalgic but incomplete.
I’m priced out of the best cars, best houses, best home theater systems, best schools. Even someone making $300k/year can’t afford all of the best of everything.
Sure, the iPhone has been “the best” possible phone which was also used by nearly everyone, but I think that’s an anomaly even in the short run.
Right now I’m paying $200/mo for Claude code to do an amount of work I would’ve had to pay $10,000/mo for. Of course I’m expecting those numbers to get closer to each other.
It’s a common tactic. Shock an industry with a new product and advertise it as being very affordable. Once you get a solid consumer base with enough organizations that have rebuilt their operations around it, slowly increase the cost and find more ways to produce revenue.
It all depends. Yes, something like that happened with Uber, but computers and consumer electronics have Moore's law working for them, so prices usually go down. (With occasional shortages like we see now with RAM - not for the first time, but it's usually temporary.)
My guess is that AI will be more like consumer electronics than like Uber.
From my recent experience with Qwen 3.5 I am less concerned about this. It certainly will never be “the best” but I did some TS refactoring with Qwen + Opencode over the weekend and it was surprisingly good. I even asked Opus 4.6 to grade the commits and it usually gave it a B- haha..
Anyway, it might be worth it to invest in an LLM rig today if you’re paranoid.
You can use hosted versions of Qwen or any of the other smaller models w/o having to invest in an LLM rig today... I plan to use the cheap hosted models until hardware advances enough that hosting locally is more cost-effective than the API costs. A couple of the hosting providers I'm looking at are https://synthetic.new and https://openrouter.ai .. I'm sure there are tons more.
I used Qwen 3.5 for image descriptions and I was shocked at how great it was. Open-source models may be very useful now; a year ago they were really bad.
I used to take Uber to work daily in 2016. It cost around $3-4 per 5-mile ride. Now the same ride costs $24 [0]. There's no indication that AI coding tools won't follow the same path, given they are funded by VC.
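For a sense of scale, here is the implied annual price growth of that ride, using the figures from the anecdote above (taking the midpoint $3.50 as the 2016 price and assuming roughly nine years elapsed):

```python
# Back-of-envelope CAGR for the Uber ride: ~$3.50 in 2016 -> $24 today.
start_price, end_price, years = 3.5, 24.0, 9

cagr = (end_price / start_price) ** (1 / years) - 1
print(f"Implied annual price increase: {cagr:.1%}")
```

That works out to roughly 24% per year, far above general inflation, which is the point of the comparison.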
But I think what matters is that the new generation of coders will adopt it as the norm. Gone are the days when you download a free text editor and just trial-and-error with the documentation one tab away. Every bootcamp is teaching React with Claude and Cursor. You have to pay for a subscription to build your BMI calculator.
>Everything points to commoditization of models. Open/distilled models lag behind frontier only by 6-12 months.
Yes, but every high performing open weights model coming out of China has (supposedly) been caught distilling frontier models.
It seems like a lot of people are making assumptions about the state of the open weights ecosystem based on information that may not be accurate. And if the big labs are able to reliably block distillation, we could see divergence between the two groups in terms of performance.
> And if the big labs are able to reliably block distillation,
The big labs will not be able to reliably block distillation without further inhibiting general use of the models, which itself will help tip the balance away from commercial models.
The article is obviously bad (I quit reading after the second paragraph), but one side effect of AI training is the increasing cost of hardware. We have commoditization of models... while reversing commoditization of hardware.
> OpenAI reportedly discussed charging $20k/month on PhD-level research agents with investors.
At this price point, it will be cheaper to hire a bunch of actual PhDs, the vast majority of whom will not earn anything close to $250k per year in most of the world.
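The arithmetic behind that claim is straightforward. The salary figures below are assumptions for illustration, not sourced data:

```python
# Back-of-envelope: how many researchers could a $20k/month agent
# subscription pay for instead? (Salary figures are assumed.)
AGENT_MONTHLY = 20_000
agent_annual = AGENT_MONTHLY * 12   # $240k/year for the agent

assumed_salaries = {
    "US top-market PhD": 250_000,
    "EU postdoc": 60_000,
    "grad student": 40_000,
}

for role, salary in assumed_salaries.items():
    print(f"{role}: agent costs {agent_annual / salary:.1f}x one person")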
I also seriously question what "PhD-level" even means in the context of a model. Someone with a PhD has developed very deep but narrow knowledge of a particular domain and has contributed to pushing out our sphere of knowledge at least a tiny bit in that pillar of competency. A model is at best a brittle, fractured, and often inconsistent representation of written human knowledge, and lacks the most basic intuitive grounding in the world due to its lack of embodiment.
In my experience, to safely get any value out of an LLM, you have to be more knowledgeable than the LLM on the topic. So in this case you'd really need a PhD to use this tool, making it at best a $20k-a-month research aid, which honestly is far more expensive than a handful of grad students, and probably less effective.
Yes, but I suspect part of the pitch is that the “PhD level models” will accomplish 50x as much in a year as the human PhD, because they’re faster AND they work 24/7.
Whether they can deliver is another question, but I wouldn’t bet my career that they can’t.
Not if they have the brain disease which makes this kind of thing appealing:
https://caviar.global/catalog/custom-iphone/iphone-17/?sort=...
Yes that flagship model incorporates an actual Rolex Daytona in solid gold.
> https://caviar.global/catalog/custom-iphone/iphone-17/london...
But what powers the chips?
You're talking about chip economics, but inference economics also depends on electricity supply and pricing dynamics.
Although there is an underlying truth: using LLMs for large-context tasks like coding is still extremely expensive.
Didn’t happen for me.
On the Plus plan, newer models reached the limit faster, so fewer tasks got done before I had to wait 5 hours.
"The underlying purpose of AI is to allow wealth to access skill while removing from the skilled the ability to access wealth. --@jeffowski"
While I don't think that's the only purpose, I can't help but think that people that become dependent on these tools will have neither wealth nor skill. Keep your skills sharp!
I’m priced out of the best cars, best houses, best home theater systems, best schools. Even someone making $300k/year can’t afford all of the best of everything.
Sure, the iPhone has been “the best” possible phone which was also used by nearly everyone, but I think that’s an anomaly even in the short run.
Right now I’m paying $200/mo for Claude code to do an amount of work I would’ve had to pay $10,000/mo for. Of course I’m expecting those numbers to get closer to each other.
No VC-funded gravy train lasts forever.
You can get a table from Ikea for a fraction of what an artisan would charge. They're not the same final product, but their function is the same.
[0]: https://idiallo.com/blog/paying-for-my-8-years-old-ride
Everything points to commoditization of models. Open/distilled models lag behind frontier only by 6-12 months.
Regulatory capture is the only thing I’m scared of with regards to tooling options and cost.