bubblelicious · 4 months ago
Articles like this are really hard to believe, and it's even harder to believe this is the hive mind of Hacker News today.

I work for a major research lab. There's so much headroom, so much left on the table with every project, so many obvious directions for tackling major problems. These last three years have been chaotic sprints: Transfusion, better compressed latent representations, better curation signals, better synthetic data, more flywheel data. Insane progress that somehow just gets continually denigrated by this community.

There is hype and bullshit and stupid money and annoying influencers and hyperbolic executives, but “it’s a bubble” is absurd to me.

It would be colossally stupid for these companies not to pour the money they are pouring into infrastructure buildouts and R&D. They know a ton of it will be waste; nobody in these articles is surprising anyone. These articles are just not very insightful. The only silver lining to reading them and the comments is the hope that all of you are investing optimally for your beliefs.

michaeldoron · 4 months ago
I agree completely.

I work as an ML researcher at a small startup, researching, developing, and training large models on a daily basis. I see the improvements made in my field every day, in academia and in industry, and newer models come out constantly that continue to improve the product's performance. It feels as if the people who call AI a bubble are not familiar with AI beyond LLMs, and the amazing advances it has already made in drug discovery, ASR, media generation, etc.

If foundation model development stopped right now and ChatGPT never got any better, there would be at least five, if not ten, years of new technological developments just building off the models we have trained so far.

ivm · 4 months ago
Yes, HN discussions of LLMs are quite tiresome. I make indie apps, and that has been getting harder and harder over the years as the API surfaces and UI variety of iOS and Android have grown.

Claude Code and ChatGPT brought me back to the early-2010s golden age when an indie could be a one-man army. Not only for code, but also for localization and marketing. I'm even finally building some infrastructure for QA automation! And tests, lots of tests. Unimaginable for me before, because I never had that bandwidth.

Not to mention that they unblock me and have basically fixed a large part of my ADHD issues because I can easily kickstart whatever task or delegate the most numbing routine work to an agent.

I just released a huge update to my language-learning app that I would never have dreamed of without LLM assistance (lots of meticulous grammar-related work over many months) and have been getting a stream of great reviews. And all of that for only $100+20 a month; I was paying almost twice as much for a Unity3D subscription a decade ago.

ivape · 4 months ago
All of that is fine. The bubble only happens if, in your ecstasy, you come to think too much of your indie apps, in which case Wall Street has no qualms about taking any rando AI app public. When this is done at scale, you create the toxic asset that 401(k)s pile into.

In short, you and others like you will enjoy your time but care very little about the systemic risk you are introducing.

But hey, whatever, gotta nut, right?

—-

I don't mean you specifically. Companies like Windsurf, Cursor, and many others are all currently building the package for Wall Street with literally no care that it will pull in retail investment en masse. This is going to be a fucked-up rug pull for regular investors in a few years.

We've been in a much wilder financial environment since 2008. It's now normal for crypto to be seen as a viable investment. AI is going to appear even more viable. Things are primed.

dehrmann · 4 months ago
Upvoted for a different perspective.

The thing to remember about the HN crowd is it can be a bit cynical. At the same time, realize that everyone's judging AI progress not on headroom and synthetic data usage, but on how well it feels like it's doing, external benchmarks, hallucinations, and how much value it's really delivering. The concern is that for all the enthusiasm, generative AI's hard problems still seem unsolved, the output quality is seeing diminishing returns, and actually applying it outside language settings has been challenging.

bubblelicious · 4 months ago
Yea a lot of this I understand and appreciate!

- offline and even online benchmarks are terrible unless they're actually a standard product experiment (A/B test, etc.). Evaluation science is extremely flawed.

- skepticism is healthy!

- measure on delivered value vs promised value!

- there are hard problems! Possibly ones that require paradigm shifts that need time to develop!

But

- delivered value and developments alone are extraordinary. Problems originally thought unsolvable are now completely tractable or solved, even if you rightfully don't trust eval numbers like LMArena, marketing copy, and offline evals.

- output quality is seeing diminishing returns? I cannot understand this argument at all. We have scaled the first good idea with great success. Do people really believe this is the end of the line? That we're out of great ideas? We've just scratched the surface.

- even with a “feels” approach, people are unimpressed?? It’s subjective, you are welcome to be unimpressed. But I just cannot understand or fathom how

MattGrommes · 4 months ago
The way I've been thinking about this is that there is The Tech and The Business. The Tech is amazing and improving all the time at the core, then there are the apps being built to take advantage of the Tech, a lot of which are also amazing.

But The Business is the bubble part. Like all the companies during the first internet boom/bubble that did things like lay tons of fiber and raise tons of money for rickety business plans. Those companies went out of business, but the fiber was still there and still useful. So I think you're right that the Tech part is getting shafted a little in the conversation because the Business part is so bubbly.

dang · 4 months ago
The community is divided about this. There's no one hivemind.

There's a general negativity bias on the internet (and probably in humans at large) which skews the discourse on this topic as any other - but there are plenty of active, creative LLM enthusiasts here.

bubblelicious · 4 months ago
I agree — probably my own selective memory and straw-manning. It just feels in my mind like the “vibe” on HN (in terms of articles that reach the front page and top rated comments) is very anti-AI. But of course even if true it is a biased picture of HN readers.

It would be interesting to see some analysis of HN data to gauge just how accurate my perception is; of course, that wouldn't clear up the bias issue.

colinmorelli · 4 months ago
I'll take a shot at rationale for this perspective, which is similar to a peer comment:

The tech is undoubtedly impressive, and I'm sure it has a ton of headroom to grow (I have no direct knowledge of this, but I'll take you at your word, because I'm sure it's true).

But at least my perception of the idea that this is a "bubble" presently is rooted in the businesses being created with the technology. Tons of money is being spent to power AI agents conducting tasks that would be 99% less expensive via a simple API call, or whose actual unstructured work sits 2 or 3 levels higher in the value chain; given enough time, there will be new vertically integrated companies that use AI to solve the problem at the root and eliminate entire categories of companies at the level below.

In other words: the root of the bubble (to me) is not that the value will never be realized, but that many (if not most) of this crop of companies, given the amount of time it takes for workflows and technology to take hold in organizations, will almost certainly not survive long enough to be the ones to realize it.

This also seems to be why folks draw comparisons to the dot-com bubble, because it was quite similar. The tech was undoubtedly world-changing. But the world needed time to adapt, and most of those companies no longer exist, even though many of the problems they tackled were solved a decade later by new startups that achieved incredible scale.

ivape · 4 months ago
I don’t think people know what the definition of this bubble is yet. I can provide one:

- AI-first app companies that actually go public on the stock exchange

- Massive influx of investment from retail as the basket of “AI” is just too much to pass up

- This basket is no longer a collection of top-tier hardware and software titans, but is led by resellers and wrappers like Palantir, like Cursor, like Windsurf, and finally rounded out with CRUD apps turned publicly traded companies. Figma going public is a very bad indicator of what's to come. Perplexity going public would be one of my biggest red-flag moments.

- The basket I’m describing is the package that includes all these “toxic” assets.

- Some really dumb big players will lose here too, because they will acquire some of these resellers and wrappers at prices they'll never recoup (News Corp buying MySpace).

- And finally, those who know, know, and they will bail first unscathed. Say it ain’t so, the story of our lives.

That will be the vehicle retail piles into. We're a little ways away from that, as companies are still building out their AI offerings. We'll need a flurry of companies like that to go public soon after OpenAI does, sparking the beginning of one of the worst bubbles ever. You won't be able to make sense of it, because the bull market will make it impossible not to FOMO in.

That's the systemic risk to this entire industry, and to the broader economy, in a few years.

Remember, humans can't have nice things. If the secondary companies didn't rush to the stock market as their prime imperative, we wouldn't have to worry, because all sensible investment would be in the large caps. The pursuit of gaudy returns will fail humans again, as always.

Stay safe and right-sized, all. The actual tech is not over-hyped.

rsynnott · 4 months ago
So, one difference between this and the dot-com bubble is that it is much, much harder to go public now, and much, much easier to raise funds as a private company. This has led to loss-making private companies with valuations that would not have been remotely plausible a couple of decades ago. Arguably a more likely end to all of this is that the VCs turn off the tap, which would kill most companies in the space within a year or so, with fairly limited contagion to the broader markets; public companies that have gone heavily into it may be badly burned, but that would be about it.

Retail may never really get to participate at all, beyond trading Nvidia and similar.

utyop22 · 4 months ago
Lol, why would VCs want retail to participate until the very end? That's the very nature of the VC game. Come on, wakey wakey!!!
jbreckmckye · 4 months ago
Data point of two, but this podcast also recently floated 2027 as the crunch point: https://youtu.be/vp1-3Ypmr1Y?si=p4GlyPwZRWOkxFtt

In my uninformed opinion, though, companies that spent excessively on bad AI initiatives will begin to introspect as the fiscal year comes to an end. By summer 2026, I think a lot of execs will be getting antsy if they can't defend their investments.

ej88 · 4 months ago
I'm a little skeptical of a full-on 2008-style "burst". I imagine it'll be closer to a slow deflation as these companies need to turn a profit.

Fundamentally, serving a model via API is profitable (re: Dario, OpenAI), and inference costs come down drastically over time.

The main expense is twofold:

1. Training a new model is extremely expensive: GPUs, YOLO runs, data.

2. Newer models tend to churn through more tokens and are more expensive to serve in the beginning, before optimizations are made.

(Not including payroll.)

OpenAI and Anthropic can become money printers once they downgrade the free tiers, add ads or other attention-monetizing methods, and rely on a usage model as people and businesses become more and more integrated with LLMs, which are undoubtedly useful.

gjsman-1000 · 4 months ago
ej88 · 4 months ago
Not really sure how this article refutes what I said?

He defines it as "everything that happens from when you put a prompt in to generate an output", but he seems to conflate inference with a query. Feeding input through the model to generate the next single token is inference; a query or response just means the LLM repeats this until the stop token is emitted. (Happy to be corrected here.)

The cost of inference per token is going down; the cost per query goes up because models consume more tokens, which was my point.
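That dynamic is easy to sketch with a toy calculation. All prices and token counts below are hypothetical, purely for illustration; they are not real model prices:

```python
# Toy illustration (hypothetical numbers): the price per token can fall
# while the cost per query rises, because newer models consume far more
# tokens per answer (chain-of-thought, agentic loops, etc.).

def query_cost(price_per_million_tokens: float, tokens_per_query: int) -> float:
    """Dollar cost of a single query at a given per-token price."""
    return price_per_million_tokens * tokens_per_query / 1_000_000

# Hypothetical older model: pricier tokens, short answers.
old = query_cost(price_per_million_tokens=30.0, tokens_per_query=1_000)

# Hypothetical newer model: 3x cheaper tokens, but 20x more tokens per query.
new = query_cost(price_per_million_tokens=10.0, tokens_per_query=20_000)

print(f"old model: ${old:.4f} per query")  # old model: $0.0300 per query
print(f"new model: ${new:.4f} per query")  # new model: $0.2000 per query
```

Even with a 3x drop in per-token price, the per-query cost goes up almost 7x in this sketch, which is the sense in which both "inference is getting cheaper" and "queries are getting more expensive" can be true at once.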

Either way, charging consumers per token pretty much guarantees that serving models is profitable (each of Anthropic's prior models turns a profit). The consumer-friendly flat $20 subscription is not sustainable in the long run.

https://epoch.ai/data-insights/llm-inference-price-trends

https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...

https://x.com/eladgil/status/1827521805755806107

toss1 · 4 months ago
There is no question LLMs are truly useful in some areas, and the LLM bubble will inevitably burst. Both can be simultaneously true, and we're just running up the big first slope on the hype curve [0].

As we learn more about the capabilities and limits of LLMs, I see no serious argument that scaling up LLMs with increasingly massive data centers and training runs will actually reach anything like a breakthrough to AGI, or even anything beyond the magnitude of usefulness already available. Quite the opposite: most experts argue fundamental breakthroughs in different areas will be needed to yield orders-of-magnitude greater utility, never mind AGI (not that further refinement won't yield useful results, only that it won't break out).

So one question is timing — When will the crash come?

The next is: how can we collect, in an open and preferably independently/distributed/locally usable way, the best usable models, so we retain access to the tech when the VC-funded data centers shut down?

[0] https://en.wikipedia.org/wiki/Gartner_hype_cycle

fred_is_fred · 4 months ago
We even have prior art. Web 1.0 and e-Commerce were truly useful and the bubble also burst.

On further thought, railroads and radio are also good examples!

shishy · 4 months ago
Yes, well, bubbles are a core part of the innovation process (new tech being useful doesn't imply a lack of bubbles); see e.g. "Technological Revolutions and Financial Capital" by Carlota Perez: https://en.wikipedia.org/wiki/Technological_Revolutions_and_...
heathrow83829 · 4 months ago
Unlike that time, some money is actually being made. I heard some figures thrown around yesterday: total combined investments of over $500 billion, and revenues of about $30 billion, $10 billion of which was payments to cloud providers, so really $20 billion in revenue. That's not nothing.
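Taking those figures at face value (they are hearsay from the comment above, not verified numbers), the back-of-the-envelope arithmetic looks like this:

```python
# Back-of-the-envelope check on the (hearsay, unverified) figures above.
investment = 500e9     # total combined AI investment, USD
gross_revenue = 30e9   # reported AI revenues
cloud_payments = 10e9  # portion that is circular payments to cloud providers

net_revenue = gross_revenue - cloud_payments
ratio = investment / net_revenue

print(f"net revenue: ${net_revenue / 1e9:.0f}B")       # net revenue: $20B
print(f"investment to net revenue: {ratio:.0f}x")      # investment to net revenue: 25x
```

So even granting the "$20 billion is not nothing" point, the implied gap between investment and net revenue is roughly 25:1 on these numbers.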
jbreckmckye · 4 months ago
It might not be a paradox: Bubbles are most likely to occur when something is plausibly valuable.

If GenAI really was just a "glorified autocorrect", a "stochastic parrot", etc, it would be much easier to deflate AI Booster claims and contextualise what it is and isn't good at.

Instead, LLMs exist in a blurry space where they are sometimes genuinely decent, occasionally completely broken, and often subtly wrong in ways not obvious to their users. That uncertainty is what breeds FOMO and hype in the investor class.

tartoran · 4 months ago
When the bubble bursts, what kind of effects are we going to see? What are your thoughts on this?
warkdarrior · 4 months ago
Massive layoffs from BigTech and lots of startups going under.
jihadjihad · 4 months ago
When AI is on the rise, layoffs are "because AI", and then when the AI bubble pops the layoffs are also conveniently "because AI".
ProllyInfamous · 4 months ago
Pre ChatGPT:

•The largest publicly traded company in the world was ~$2T (Saudi Aramco, no longer even top ten).

•Nvidia (the current largest, at $4.3T) was "only" ~$0.6T [$600,000 million].

•The top 7 public tech companies are where the predominant gains have accrued and held.

•On March 16, 2020, all publicly traded companies were worth ~$78T combined; at present, ~$129T.

•Gold has doubled over the same period.

>what kind of effects are we going to see

•Starvation and theft like you've probably barely witnessed in your 1st- or 3rd-world lifetime. Not from former stockholders, but from former underling employees, out of simple desperation. Everywhere, indiscriminately, from the majority.

•UBI & conscription, if only to lessen the previous bullet point.

¢¢, hoping I'm wrong. But if I'm not, maybe we can focus on domestic issues instead of endless struggles abroad (reimplement the Civilian Conservation Corps?).

tim333 · 4 months ago
I think you are a bit pessimistic on the economics. AI should increase overall output and prosperity, and there are a bunch of ways for politicians to redistribute things, if people vote for it.
tim333 · 4 months ago
> OpenAI began this hype cycle [...] and its death (or, as mentioned, some other kind of collapse, such as acquisition) is the sign that we’re done here, in the same way that FTX signaled the end of the cryptocurrency boom..

The collapse of FTX sent bitcoin from ~$20k to ~$17k. It's now $110k. I imagine the AI boom will 'collapse' in the same sort of way.

A lot of the economics depends on whether you think human-level intelligence is coming or not. Zitron kind of assumes not, in which case his economic doomerism makes sense. But if it does come, you could effectively double GDP, which is a lot of financial upside.

AndrewKemendo · 4 months ago
Having been through at least two AI hype cycles professionally, this is just another one.

Each cycle filters out the people who are not actually interested in AI: the grifters and shysters trying to make money.

I have a private list of these starting from 2006 to today.

LLMs =/= AI, and if you don't know this, you should be worried, because you are going to get left behind: you don't actually understand the world of AI.

Those of us that are “forever AI” people are the cockroaches of the tech world and eventually we’ll be all that is left.

Every former "expert systems scientist", "Bayesian probability engineer", "computer vision expert", "big data analyst", and "LSTM guru" is having no trouble implementing LLMs.

We’ll be fine

tim333 · 4 months ago
>this is just another [hype cycle]

As a casual observer for decades, I think this one is different in that we are at approximate hardware equivalence with the human brain, and still advancing, which will have interesting economic implications.

AndrewKemendo · 4 months ago
Yeah, but it will deflate slightly as hype transitions to RL (finally!) and LLMs become boring, regular "tech."