Readit News
mgh95 · 3 months ago
Perhaps the most telling part of this entire report is Table 1. It shows that non-work usage has grown 8x in one year, whereas work usage has grown only ~3.4x. Considering that non-work usage now makes up 73% of ChatGPT requests, ChatGPT is very much a consumer product, despite substantial marketing of LLM products in a professional context and even compelled usage in some corporations.

Since consumers in the b2c market are typically tight-fisted, I don't think this bodes well for the long-term economics of the market. This may explain the relatively recent pivot to attempting to "discover" uses.

I don't think this ends happily.

dolphinscorpion · 3 months ago
"I don't think this ends happily."

Still, 700 million users, and they can still add a lot of products within ChatGPT. Ads will also be slapped on answers.

If all fails, Sam will start wearing "Occupy Jupiter" t-shirts.

autoexec · 3 months ago
> Ads will also be slapped on answers.

Ads won't be slapped onto answers; my guess is that they will be subtly and silently inserted so that you don't even notice. It won't always be something you can see, either: companies, political groups, and others who seek to influence you will pay to have specific words/phrases omitted from answers as well.

AI at this point is little more than a toy that occasionally lies outright, yet we're already seeing it hurt people's ability to think, be creative, use critical thinking skills, and research independently.

mgh95 · 3 months ago
And Friendster at one point had over 100m users. A business with gross margin (and, more importantly, positive cash flow) is more important than users. This data is not a good indicator of either.
empiko · 3 months ago
IMO adding ads is not going to be that easy. It is relatively easy to implement ads when the user is scrolling through tons of content and the ads are "organically" injected into the stream. But if the user is seeking a specific answer in a chat-like GUI, what will you do exactly? Whatever ad you show will have to be visually distinguished, and the user will just scroll past it to get to the "true" answer they want. Sure, you will still get the product in front of some eyes, but I would expect this to be less effective than other social-media-based ads.
resfirestar · 3 months ago
The statistic is from ChatGPT consumer plans, so I don't think it says anything useful about enterprise adoption of LLM products or usage patterns in those enterprise contexts.
adeelk93 · 3 months ago
Exactly. Enterprise use has a carveout for analytics, so it wouldn't be in the paper's population anyway.
CuriouslyC · 3 months ago
OAI has a very strong potential play in the consumer devices market. The question is whether they approach it right. If OAI developed high-end laptops/tablets with deep AI integration, with hardware designed around a very specific model architecture (a hyper-sparse large MoE with cold-expert marshalling/offloading via NVMe), that would be incredibly compelling. Don't forget they've got Jony; it wouldn't just be a groundbreaking AI box, it'd be an aesthetic artifact and status symbol.
DenisM · 3 months ago
Consumers have low friction on the way in and on the way out. Especially when media hype gets involved.

Businesses have higher friction: legal, integrations, access control, internal knowledge leaks (a document can be access-restricted, but its contents may leak into the result of a more open query). Not to mention the typical general inertia. This friction works both ways.

Think capacitive vs inductive electric circuits.

mgh95 · 3 months ago
I don't see how friction is the primary driver here. ChatGPT is available through the strongest enterprise sales channel there is -- Azure. The Microsoft enterprise sales engine is probably the best in the world.

Similarly, if costs double (or worse, rise to the point where they erode typical SaaS margins) and LLMs lose their shine, I don't think there will be friction on the way out. People (especially executives) will offer up ChatGPT as a sacrifice.

ares623 · 3 months ago
Consumers do have very high friction with chatbots, as clearly demonstrated by the backlash to the GPT-5 update and the loss of GPT-4.
andy99 · 3 months ago
If people find it useful but enterprise adoption is lagging, doesn't that indicate there's still a big upside?

On the other hand, I remember when BlackBerry had enterprise locked down and got wiped out by consumer focused Apple.

In any event, having big consumer growth doesn't seem like a bad thing.

It will be bad if it starts a race to the bottom for ad-driven offerings, though.

ares623 · 3 months ago
It’s been shoved down enterprise throats for months/years. Shareholders, CEOs, workers (at the start), and users (at the start) have never had such a unified understanding of what they want as during this AI frenzy. All the stars were aligned for it to gain more traction. And yet…

It’s the prodigal child of tech.

majormajor · 3 months ago
> If people find it useful but enterprise adoption is lagging, doesn't that indicate there's still a big upside?

It could indicate that many people find it more of an entertainment product than a tool, and those are often harder to monetize. You've got ads, and that's about it, and that puts a probable cap on your monthly revenue per user that's less than most of the subscription prices these companies are trying to charge (especially in non-US countries).

(I find it way more of a tool and basically don't use it outside of work... but I see a LOT of AI pics and videos in discord and forums and such.)

mgh95 · 3 months ago
When Apple sells a device, they get more revenue with minimal costs, turbocharging revenue and profits.

When OpenAI sells a ChatGPT subscription, they incur large costs just to serve the product, shrinking margins.

Big difference in unit economics, hence the quantization push.

EagnaIonat · 3 months ago
I think the data might be skewed.

They only analyzed the consumer plans and ignored the Enterprise, Teams, and Education plans.

ares623 · 3 months ago
Looks like they only included actual chats and not agentic/copilot usage. IMO that makes the study quite incomplete.
mgh95 · 3 months ago
The chats alone are backbreakingly costly relative to the market mix of ChatGPT.

Rest of the market be damned -- combined with the poor customer mix (low- to middle-income countries), this explains why the big labs have pushed so hard to quantize models and save costs. You effectively have highly paid engineers/scientists running computationally expensive models on some of the most expensive hardware on the market to serve how-to instructions to people in low-income countries.

This doesn't sound good, even for ad-supported business models.

standardUser · 3 months ago
LLMs are the next ISPs, and those households that haven't yet found room for one in their monthly budgets soon will. And much like ISPs, I'd expect the starting $20/mo to evolve over time into a full-size utility bill. Not all households, of course, but at utility scale nonetheless.
gdhkgdhkvff · 3 months ago
The difference is that ISPs usually have monopoly/duopoly pricing power, while LLMs already have freely available open-source models. If one AI company decides to start gouging, they have to compete with other providers AND open source. And if all of the AI companies start colluding on price gouging, there's always the option of new competitors cloud-hosting open-source models.

That said, I do think prices will eventually increase somewhat, unless SOTA models start becoming profitable at current prices (my knowledge is at least 6 months old on this, so maybe they already have?).

dzink · 3 months ago
Replace AI with electricity and the argument looks very different. I think the whole industry is going the utility route over time. When electricity, railroads, shipping containers, or other similar high-infrastructure-cost systems were first released, the value unlocked for the smallest, most profitable customers expanded consumption far more than it did for the large users at the beginning. In electricity, for example, few could have predicted data centers, or crypto, or electric cars boosting demand at the start. As soon as something becomes cheaper with scale (which is what AI companies are going for), consumption skyrockets as tech catches up. The utility downside is obviously an eventual guaranteed monopoly and potentially government involvement, or in this case the possibility of AI becoming a chunk of the government as well, especially with social media content steering votes (text generation really being a tool to steer human opinions), power, and public funding as a result.
dzink · 3 months ago
In case anyone doubts the growth and progress for scammers enabled by it: https://www.reuters.com/investigates/special-report/ai-chatb...
TriangleEdge · 3 months ago
I think AI will also enable the discovery of psychopaths and narcissists, so the dystopia mentioned is uncertain. When AI can confidently boil someone down to a few labels like this, we may get competent leadership for the first time ever.
vonnik · 3 months ago
As Google has shown, the consumer/business market is not either/or.
faangguyindia · 3 months ago
>I don't think this ends happily.

People said the same thing about YouTube: "video is bandwidth hungry, there's no way to make money off of it."

And consumer users are still feeding the LLM with training data.

qwerty_clicks · 3 months ago
Work usage could be dropping due to limits on ChatGPT in the workplace and the use of Copilot in a secured Microsoft tenant.
variadix · 3 months ago
The drop in ChatGPT usage once summer started was similarly indicative of what is driving growth.
lispisok · 3 months ago
OpenAI builds a profile of you based on your chat history, and people are far more personal with these things than with Google search. It's gonna be a goldmine when they decide to use that profile to make money.
LeicaLatte · 3 months ago
I think the 73% non-work usage ratio will flip again within 2-3 years, but not because consumer usage shrinks. As AI becomes embedded in workflows through APIs the "work" category is set to expand dramatically.
apwell23 · 3 months ago
no
PeterStuer · 3 months ago
The absolute worst I have encountered by far is people using ChatGPT to self-diagnose their presumed psychological conditions.

Ofc ChatGPT goes in hard to sycophantically confirm all 'suggestive' leads with zero pushback.

okdood64 · 3 months ago
> sycophantically confirm all 'suggestive' leads with zero pushback

This is true. However:

As someone who's done multiple assessments in a clinical setting for anxiety & depression: there is no special magic that requires a human to do it, and many providers are happy to confirm a diagnosis pretty quickly without digging in more. There's the GAD-7 & PHQ-9, respectively. While the interview is semi-structured and the interviewer has some discretion (how the patient presents in terms of affect, mood, etc.), they mostly go off the quiz.

The trouble you can run into is if there's another condition or differential diagnosis which could be missed. (By both an LLM and the interviewer alike.)

qwerty_clicks · 3 months ago
I commonly switch between ChatGPT, Perplexity, and Copilot, whichever is closest to my mouse or shortcut. Copilot is clearly the worst of the three, but I have no true loyalty and, most of the time, don't care. I suspect I am getting weak model responses from Perplexity at times, but it's good enough to keep moving fast. Sam mentioned bringing memory to people, not just because it's what people want but, I suspect, because it will help lock people into one platform of snowballing context.
PolicyPhantom · 3 months ago
People don’t really use ChatGPT as a search engine replacement. It’s more about decision support, writing, and formatting tasks. That matches what I see at work: younger colleagues often use it for drafting text or templates, but not for “just looking things up.”
hawngyeedun · 3 months ago
> People don’t really use ChatGPT as a search engine replacement

Some do, and they think that they are using it as a replacement. I've been doing research on its use among college students and I've heard firsthand that some of them (especially from students in non-STEM fields) think ChatGPT can be as useful as, if not better than, search engines at times for _seeking_ information.

You may be talking to a specific subset of the population, but once you branch out and observe/hear from broader demographics, you'd be surprised to learn about people's mental model of the genAI technologies.

kristopolous · 3 months ago
This is crazy. I was having a conversation earlier where I divided up the use cases, and my taxonomy was almost identical: I didn't include "writing help" but had everything else. Then I guessed the trends and nailed the order of usage.

I mean, how often do you make fairly speculative claims and then an hour later see a just-published report validating them? Nuts.

I personally hate chatgpt's voice (writing style) but I guess that's a minority position.

ProllyInfamous · 3 months ago
Per the article, 1 in 11 people (globally) use ChatGPT at least weekly. Smarter tech types, as well as students, are heavier users.