some_random · 2 months ago
So what exactly is the "AI lifestyle subsidy"? The article doesn't seem entirely clear on it, seeing as its last line essentially asks this same question. Some friends and I have been taking advantage of cheap GPU time from a company trying to break into that space, and of course lots of AI tools are being sold below cost, but is that really it? Compare "GPU time is cheaper" to the classic "$10 steaks delivered directly to your house": I'm never going to get steaks delivered at the real price, but I'm still going to rent a GPU when I need it even at a sustainable price. All these tools might get more expensive, or the models will get better so you don't need the top-end one, or maybe we'll just figure out how to run models more cheaply, but real steak prices and the cost of delivery have only gone up. I just don't think the two are as comparable as the article implies.
netdevphoenix · 2 months ago
>So what exactly is the "AI lifestyle subsidy"

The world's richest subsidizing the real cost of offering AI services given the current state of our technology.

Once it's clear that AGI won't arrive anytime in 20XX, where XX is under 40, the money tap will begin to close.

danaris · 2 months ago
> AGI won't arrive anytime in 20XX, where XX is under 40

Honestly, I think that's quite generous. And I only phrase it that way, rather than saying "that XX should be 99", because trying to predict more than about 15 years out in tech, especially when it comes to breakthroughs, is a fool's errand.

But that's what it's going to take to reach AGI: a genuine, unforeseeable breakthrough that lets us do new things with machine learning/AI that we fundamentally couldn't do before. Just feeding LLMs more and more data won't get them there; they're already well into diminishing-returns territory.

some_random · 2 months ago
So is the lifestyle being subsidized that of the researchers Zuck hired for $100M? That's a meaningfully different usage of the phrase from the original "millennial lifestyle subsidy", to the point where the comparison isn't useful. Or, again, is it just the fact that AI products are being offered below cost?
barrkel · 2 months ago
Are they subsidizing it?

Training is definitely "subsidized". Some think it's an investment, but with the pace of advancement, depreciation is high. Free users are subsidized, but their data is grist for the training mill, so arguably they come under the training subsidy.

Is paid inference subsidized? I don't think it is by much.

hackable_sand · 2 months ago
I would wait until 2100.
flowerthoughts · 2 months ago
> The world's richest

... or your defined-benefit pension fund trying desperately to stay solvent.

queenkjuul · 2 months ago
They mean that AI services will get worse (ad-driven or more expensive). The models will eventually be tweaked to serve revenue generation, not usefulness, just like Google. Enjoy the subsidized, "genuinely trying to be useful" era we're in now, because it won't last.
tim333 · 2 months ago
The "AI lifestyle subsidy" is a bad analogy to things like 1/3 cost Uber rides which were fun while they lasted. A friend of mine found a hack for the limo service and got about 30 of those for free. I'm not sure people are saying wow, I'm living the AI lifestyle. The dot com boom seems a better model for what's happening now.
bluGill · 2 months ago
What price would you pay for GPU time? If it were $10,000 per hour, would you still pay? What you are really saying is that you think there is a reasonable price that enough people like you would pay, one that allows the sellers to make enough money to keep offering it.
pier25 · 2 months ago
In 2024 OpenAI generated some $3.5B in revenue and still lost like $5B. It means they spent something like $8.5B to run this thing [1].

They would have lost less money if they had been selling dollars at 50 cents.

[1] https://www.cnbc.com/2024/09/27/openai-sees-5-billion-loss-t...
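
Back-of-envelope in Python, using only the figures cited above (a sketch, not OpenAI's actual accounting):

    revenue = 3.5e9        # ~$3.5B in 2024 revenue, per the CNBC report
    loss = 5.0e9           # ~$5B reported loss
    cost = revenue + loss  # implied spend: ~$8.5B

    # Revenue returned per dollar spent: ~$0.41, which really is worse
    # than selling dollars at 50 cents.
    print(round(revenue / cost, 2))  # 0.41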

disqard · 2 months ago
It's telling that this 100% factual comment barely elicits a response today -- tells you that this is standard practice, apparently.
throwanem · 2 months ago
This has been coming for a long time and it is why I use local models only. I'm willing to give up capabilities in exchange for being able to trust that whatever biases may exist in the models I do use remain static and predictable.
likium · 2 months ago
We only have access to local models because they're subsidized too. There's nothing to prevent companies or state actors from paying to have particular biases baked into future models.

Also, local models are close in capability now, but who knows what that will look like in a few years.

acoard · 2 months ago
> We only have access to local models because they're subsidized too.

Yes, and the flow of future models may dry up, but we'll have the current local models forever.

throwanem · 2 months ago
Eh. Files on my hard disks change when I say. And getting hooked on ChatGPT is like - is exactly like - getting addicted to money. If I benefit less from what I do use, I'll accept that trade for never having the rug yanked out from under me. It looks to me like raw model capabilities are topping out anyway; the engineering around them looks likely to make more of the difference in the back half of the decade, and I see nothing about that which essentially requires nuclear-powered datacenter pyramids or whatever.
unshavedyak · 2 months ago
I agree with this, though I'm still using Claude atm. I figure if we're aware of the downsides you pointed out, we can skip the fast-changing landscape of self-hosting. It keeps getting cheaper and cheaper to self-host, so I'm not sure at what point it makes sense to invest.

For me the switching point will probably be when the AI companies start the big rug pull. By then, my hope is that self-hosting will be cheaper, better, easier, etc.

Kon-Peki · 2 months ago
Perhaps you should evaluate it in terms of the price premium for speed. Sometimes you buy milk at the 7-Eleven instead of the grocery store. It costs more, but it's worth it for the convenience in the situation you're currently in. Most of the time it is not.

You can buy a used recent PC for a hundred or two, cram it full of memory, and then run a very advanced model. Slowly. But if you are planning to run an agent while you sleep and then review the work in the morning, do you really care if the run time is 4 hours instead of 40 seconds? Most of the time, no. Sometimes, yes.

throwanem · 2 months ago
Better not to form the habit, I thought. I'm sure I miss out on some things that way, but that is the lesser risk.

I do use the Gemini assistant that came with this Android, in the same cases and with the same caveats with which I use Siri's fallback web search. As a synthesist of web search results, an LLM isn't half bad, at least when hearing from one doesn't come as a surprise.

haolez · 2 months ago
How do you do it? Do you host on your hardware or do you use cloud-based providers for open models?
yzjumper · 2 months ago
Not the user above, but I am using the iOS app PrivateLLM when I need offline access or want uncensored models. I use kappa-3-phi-abliterated; models under 6B usually work without crashing. Using Ollama on my Mac Mini with 24GB and the base processor (M4, not M4 Pro), I am able to run 7B models. On the Mac I am able to set up API access.

Funnily enough, the Mac has almost the same processor as my iPhone 16 Pro, so it's just a RAM constraint, and of course PrivateLLM does not let you host an API.

An M4 Pro would do much better due to the increase in RAM and GPU size.
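
For what it's worth, once Ollama is serving, querying it from Python is a single request against its local HTTP API. A minimal sketch (the model name is an example; use whatever you've pulled):

    import requests  # pip install requests

    # Ollama listens on localhost:11434 by default.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mistral",  # any model fetched with `ollama pull`
            "prompt": "Why is local LLM inference mostly RAM-bound?",
            "stream": False,     # return one JSON object, not a token stream
        },
    )
    print(resp.json()["response"])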

mansilladev · 2 months ago
If you have Docker (Desktop) installed, you can get a local model going on your computer with just a couple of clicks: llama3.2 (3B), llama3.3 (70B), deepseek-r1, and about a dozen others.
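
If you'd rather script it than click, the model runner also exposes an OpenAI-compatible endpoint. A sketch in Python; the port, path, and model tag here are assumptions about the defaults, so check what your Docker Desktop actually exposes:

    from openai import OpenAI  # pip install openai

    # Assumed defaults -- verify in Docker Desktop's Model Runner settings.
    client = OpenAI(
        base_url="http://localhost:12434/engines/v1",
        api_key="docker",  # placeholder; the local endpoint doesn't check it
    )
    reply = client.chat.completions.create(
        model="ai/llama3.2",  # a model pulled via `docker model pull`
        messages=[{"role": "user", "content": "Say hello from a local model."}],
    )
    print(reply.choices[0].message.content)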
politelemon · 2 months ago
Locally it's pretty simple to run models on GPUs, even low-powered ones. Have a look at gpt4all as a starting point, but there are plenty of offerings in this space.
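
To give a flavor of how little setup it takes, gpt4all ships a Python binding too. A minimal sketch (the model file named here is one of the stock catalog downloads):

    from gpt4all import GPT4All  # pip install gpt4all

    # Downloads the quantized model file (a few GB) on first run;
    # uses a GPU if one is available, otherwise falls back to CPU.
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
    with model.chat_session():
        print(model.generate("Name three uses for a local LLM.", max_tokens=128))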
egypturnash · 2 months ago
You have a very curious definition of "local" if that includes "cloud-based providers".
atentaten · 2 months ago
What does your hardware setup look like?
AstroBen · 2 months ago
Even if it's unintentional, the pushing of products is already happening. If you ask any AI for a tech stack to create a web app, you'll get recommendations for Vercel, AWS, and co. This is going to be the new SEO.
madcaptenor · 2 months ago
This is basically the new "nobody got fired for buying IBM".
pphysch · 2 months ago
Wielded responsibly, this can be a good thing, because bad practices can be directly fine-tuned out of the model. Think about how much junk pollutes legacy knowledge domains like webdev.

But yes, it will be abused for advertisement as well.

spwa4 · 2 months ago
I think you have totally misunderstood what enshittification is about. You'll ask for a web app, and out will come a Salesforce link that charges $10 per visitor and $10k to get your data back out, along with a 30% kickback from Salesforce to OpenAI or whoever.
ChrisMarshallNY · 2 months ago
I just released a very minor update to one of my iOS apps.

The approval took 3 days. It hasn't taken 3 days in almost a decade.

The Mac version was approved in a couple of hours.

I'm quite sure that the reason for the delay is that Apple is being deluged by a tsunami of AI-generated crapplets.

Also, APNS server connections have suddenly slowed to a crawl. Same reason, I suspect.

As far as I'm concerned, the "subsidy" can't end fast enough.

jasonthorsness · 2 months ago
The bet is that the cost of delivering the same results will go down through hardware or software advancements. This bet still seems reasonable based on how things have gone so far. Providers right now are willing to burn money acquiring a customer base; it's like really, really expensive marketing.
pier25 · 2 months ago
Even if the cost goes down, it will not change the fact that they need to recoup something like a trillion dollars before AI starts generating any profit.

And there's really no timeline for costs going down. It seems the only way to get better models is to process more data and add more tokens, which only increases the complexity of it all.

bryanlarsen · 2 months ago
The bet is that costs will go down enough that ad-supported AI becomes profitable. This is not a positive outcome; a large part of the article is about the evils of the ad-supported model.
jsnell · 2 months ago
The costs are already far, far below that level. LLM inference is cheap, but not free, and the only reason the consumer-facing businesses are not profitable is that nobody is yet showing ads, i.e., they are providing service to hundreds of millions of people with no monetization at all. The moment they start showing ads, even basic ad formats will easily make them profitable, let alone more sophisticated LLM-native ad formats or the treasure trove of targeting data that an LLM chat profile can provide.
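
As a toy illustration of that claim, in Python (every figure below is a made-up assumption, not a reported number):

    # Hypothetical monthly economics for a single free user.
    queries_per_month = 100
    inference_cost_per_query = 0.003  # a few tenths of a cent per answer
    inference_cost = queries_per_month * inference_cost_per_query  # $0.30

    ad_impressions = queries_per_month        # one ad per answer
    cpm = 10.0                                # $ per 1,000 search-like ads
    ad_revenue = ad_impressions / 1000 * cpm  # $1.00

    print(ad_revenue > inference_cost)  # True under these assumptions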
some_random · 2 months ago
Is that really the bet? Is it not enough for a $20-per-month subscription to be sustainable, with the free tier being a trial for that subscription?
barrenko · 2 months ago
The future ain't what it used to be. The web is dead (worse, actually: it's a putrid rotting zombie, destroying our children's lives and ours), but the internet will survive.
tim333 · 2 months ago
Funny - it still seems alive when I use it, including typing this very comment.
GaggiX · 2 months ago
> worse, actually: it's a putrid rotting zombie, destroying our children's lives and ours

What are you talking about? Is this a rant against TikTok or other socials?

jkingsman · 2 months ago
I suspect they're referring to "dead internet theory"[0], extending the metaphor to zombies: internet content will still appear to be written by humans/organic, but will instead be AI slop.

[0]: https://en.wikipedia.org/wiki/Dead_Internet_theory

seydor · 2 months ago
The zero-interest-rate money went into stocks. The stocks have now grown to monstrous valuations, able to subsidize free products for decades. If in danger, there is a lot of leeway for layoffs in all the tech companies. WhatsApp was 10 employees. The subsidy will go on.