If you spend a couple of years with an LLM, really watching and understanding what it's doing and learning from its mistakes, you can climb the ladder very quickly.
Let's say tomorrow OpenAI and Anthropic have a huge down round, or whatever event people think would mark the end of the bubble. That doesn't mean suddenly nobody is using AI. It means they have to rapidly reduce burn: pausing new model versions, laying off staff and cutting the comp of those who remain, hiking prices a lot, getting more serious about ads and other monetized features. They would still be selling plenty of inference.
In practice the action is mostly taking place outside public markets, and bubbles are a public-markets phenomenon. We won't necessarily know what happened at the most exposed companies until it's in the rear-view mirror. See how "ride sharing"/taxi apps played out: dumping on the market for long periods to buy share, followed by a relatively easy transition to annual profitability, in some cases without ever going public. Some investors probably got wiped out along the way, but we don't know who exactly or by how much.
Most likely outcome: the AI bubble deflates steadily rather than suddenly bursting. Resources get diverted from training to inference, new features slow down, new models are weaker and more expensive than the old models, and the old models are turned off anyway. That sort of thing. People will call it enshittification, but it'll really just be the end of aggressive dumping.
Outsourcing this to an LLM is similar to an airplane stall: I just dip mentally. The stress goes away too, since I assume the LLM will get rid of the "problem", but I no longer have any incentive to think, create, or solve anything.
Still blows my mind how differently people approach some fields. I see people at work who are drooling over having code written for them, but I'm not in that group.
+100 for this.
There's an on-device LLM packaged into iOS, iPadOS, and macOS 26 (Tahoe) [1]. They even have HIG guidance on the use of generative AI [2].
Something like half of all Macs are already running macOS 26 [3], so this could be the most widely distributed on-device LLM on the planet.
I think people are sleeping on this, partly because the model is seen as underpowered. But I think we can presume it won't always be so.
I've just posted a Show HN of an app for macOS 26 I created that uses Apple's local LLM to summarize conversations you've had with Claude Code and Codex. [3]
I've been somewhat surprised at the quality and reliability of Apple's built-in LLM and have only been limited by the logic I've built around it.
I think Apple's packaging of an LLM into its core operating systems is actually a fast move in AI, and it even has the potential to act as an existential threat to Windows.
[1] https://developer.apple.com/videos/play/wwdc2025/286/
[2] https://developer.apple.com/design/human-interface-guideline...
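For anyone curious what calling the on-device model looks like, here's a minimal Swift sketch using Apple's Foundation Models framework from the WWDC session in [1]. The function name and prompt text are my own illustration; the API calls (`SystemLanguageModel.default.availability`, `LanguageModelSession`, `respond(to:)`) are from the framework, and this requires macOS 26 / iOS 26 on Apple Intelligence-capable hardware:

```swift
import Foundation
import FoundationModels

// Runs entirely on-device; no network call is made.
@available(macOS 26.0, iOS 26.0, *)
func summarize(_ transcript: String) async throws -> String {
    // The model may be unavailable (still downloading, Apple Intelligence
    // disabled, or unsupported hardware), so check before prompting.
    guard SystemLanguageModel.default.availability == .available else {
        throw NSError(domain: "OnDeviceModelUnavailable", code: 1)
    }

    // A session holds conversation state; instructions steer the model's behavior.
    let session = LanguageModelSession(
        instructions: "Summarize the following conversation in three short bullet points."
    )
    let response = try await session.respond(to: transcript)
    return response.content
}
```

This is the same kind of call the summarizer app above would make under the hood; the framework also supports guided generation into typed Swift structs, which is where it gets more interesting than plain text prompting.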
So I don't see what unique advantage this gives Apple. These days people's data lives mostly in the cloud. What's on their phone is just a local cache.
Sorry, this just made shivers run up my back.