jerf · a year ago
I don't know about "Winter". The original "AI Winter" was near-total devastation. But it's probably reasonable to think that after the hype train of the last year or two we're due to be headed into the Trough of Disillusionment for LLM-based AI technologies on the standard Gartner hype cycle: https://en.wikipedia.org/wiki/Gartner_hype_cycle
mewpmewp2 · a year ago
Maybe, but I already find modern AI immensely useful in a lot of ways, and I see things constantly improving. E.g. realistic video generation, music generation, OpenAI's advanced voice mode - it's still wild to me how good these are and how well LLMs can perform.

I still remember thinking, even when first seeing GPT-3.5, that what it could do must be impossible and that there must be some sort of trickery involved, but no.

I feel like I'm still impressed and amazed daily by what AI can do now.

jerf · a year ago
Note the "trough of disillusionment" does not drop to 0.

It is also a measure of hype, not utility.

The current crop of AI tools have their uses and they aren't going away. However, the hype was basically built on the principle of "YOU CAN JUST WAVE AI AT ANYTHING AND REMOVE ALL THE PEOPLE!!!1!", without any need to think about how the AI will be useful, or think about how it will fail, or, you know, doing any of the usual engineering that new technologies inevitably need. You won't need to! The AI will just engineer itself!

This is, of course, bunk.

player1234 · a year ago
Infinite content produced by AI has close to no value.
foogazi · a year ago
It can't be economically sustainable if this is it, right?
brotchie · a year ago
Feels different to past hype cycles (Internet bubble, Crypto bubble).

LLMs gained meaningful capabilities very quickly: one week they were not that useful, the next week they were.

A function that takes text and returns text isn't that useful without it being integrated into products, and this takes time.

Next 12-24 months will be the AIfication of many workflows: that is, discovering and integrating LLM-based reasoning into business processes. Assuming even a gradual improvement in capabilities of LLMs over time, all of these AI enhanced business processes will simply get better.

Diffusion of technology is slow slow slow, and then fast. As I become more capable with AI (e.g. learning which of my tasks as an engineer AI actually helps with), I'm getting better and better at using it. So there's a non-linear learning curve: as you learn to use the technology better, you unlock more productivity.

vrighter · a year ago
the AI-ification of products, to me, sounds like making them both less reliable and less predictable. Not a good thing
contravariant · a year ago
Honestly I think we're already there, it just takes a bit before the realisation trickles down.

The successful uses of LLMs don't seem to depart too far from the basic chatbot that started the whole hype. And the truly 'magic' uses seem to fail in practice because even a small error rate is way too high for a system that cannot learn from its mistakes (quickly).

GaggiX · a year ago
>don't seem to depart too far from the basic chatbot that started the whole hype.

Is ChatGPT-3.5 a basic chatbot now? It's been less than two years since it was SOTA.

urbandw311er · a year ago
Nicely put
janalsncm · a year ago
I like the distinction between producers and promoters. This is why I am naturally skeptical of polished demos and people posting in their real name. If you post in your real name, you are at a minimum promoting yourself (generally boils down to “I am a smart, employable person”).

I wish I had a better heuristic, but the best I’ve found on Twitter is pseudonymous users with anime profile pics. These are people who don’t care about boosting a product. They’re possibly core contributors to a lesser-known but essential python library. They deeply understand a single thing very well. They don’t post all day because they are busy producing.

larodi · a year ago
these, indeed, are the actual accounts worth following - those still bearing some resemblance to the early internet adopters who were there for the fun, not the profit. though I never thought about the name perspective, something I can only agree with you on. which immediately cancels out people such as lex friedman and alex volkov - again, seems like the right thing to do tbh. some very obscure accounts are to me the real opinion leaders; they know how to ride the viral wave on repeat. Grimes doesn't, though.
Terr_ · a year ago
Soon: "LLM, take my self-promotional content and rephrase it as if I was a producer."
pajeets · a year ago
I also second anime pfps and other blue checkmarkless accounts on X producing far more grounded takes (not just for AI).

X is a very good microcosm of that producer/promoter model from Dalio, except that the promoters are seemingly the entirety, and they are extremely loud to the point that it trumps all common sense and reasoning.

It's also very tiring to scroll through "I made $XXXX in 30 days with AI and I'm only a 17-year-old high school student" or "we shipped a ChatGPT wrapper and used dark patterns for subs".

On LinkedIn it's far worse: everybody is a genius, and everybody needs to pay attention to me on the remote chance a recruiter from big tech will reach out and pay me a large salary for managing their impression.

All in all, it really feels like the American economy is running on pure hopium and fumes. This cannot be good for it in the long run.

HKH2 · a year ago
> On LinkedIn it's far worse: everybody is a genius, and everybody needs to pay attention to me on the remote chance a recruiter from big tech will reach out and pay me a large salary for managing their impression.

Right. So much content, but it feels so empty. Do people actually network there?

i-cjw · a year ago
> Or take Ed Thorp, who invented the option pricing model and quietly traded on it for years until Black-Scholes published a similar formula and won a Nobel Prize

Hardly quietly. Thorp published "Beat the Market" in 1967, detailing his formulae six years before Black and Scholes published their model.

zcw100 · a year ago
People warning about a coming AI winter are almost as annoying as people doomsaying about AGI. It's going to be somewhere in between. It can be disappointing and revolutionary at the same time. We had the dot-com crash, and yet out of that grew some of the largest corporations in the world: Microsoft, Facebook, Apple, Amazon, etc.
janalsncm · a year ago
The article is less about a winter for the field than a winter for AI boosters, who will soon move on to become “experts” in The Next Big Thing.

For people working in the field, deep learning has already proven itself to be self-funding. It’s the main source of Google’s profits. It’s TikTok’s algorithm. Et cetera.

threeseed · a year ago
Every one of those companies predates the dot com bomb by quite some time.

And AGI is science fiction with no credible plan of how to get there. If you can even get everyone to agree on the same definition.

An AI winter is something that can be measured and is factual eg. the lacklustre spending on AI products and the dry up in VC funding.

janalsncm · a year ago
Kind of depends on what you call “AI”. Large language models, maybe. AI is a lot more than that though. Deep learning isn’t going anywhere.

The fact that VCs aren’t throwing millions of dollars after every CS undergrad who figured out how to make an API call to OpenAI means they are wising up. The main question is why it took this long.

archgoon · a year ago
Microsoft and Apple preceded the dot com bomb by several decades. (Microsoft 1975, Apple 1976)

Amazon was a company that was around and survived the dot com bomb (founded in 1994, roughly around the time of the beginning of the bubble) [though its stock took about 7 years to recover]

Facebook was post dot com bomb. (founded 2004)

mindcrime · a year ago
And AGI is science fiction with no credible plan of how to get there.

I mean... you can't really have a (strict) plan for how to build something that nobody knows how to build (yet). But that doesn't necessarily mean it's "science fiction". There are credible reasons[1] to believe that AGI will happen - eventually. To me, the biggest question is around timeline, not "will it happen or not". Now granted, that allows for anything from "tomorrow" up to "the heat death of the universe", so you can accuse me of dodging the issue if you'd like. But I'd bet money on it happening closer to "tomorrow" than "the heat death of the universe".

[1]: among others - the progress on AI that's already been made. And while we may not have AGI, it's hard to deny that we have AI that's a far sight better than what we had in 1956. The other is that, unless you believe in magic, the human brain is an existence proof that human-level AGI is achievable on a deterministic machine that operates according to the physical laws of the universe. It would seem to follow that it should be possible (albeit perhaps very difficult) to achieve that same level of intelligence on some other deterministic machine. And note that even if "Penrose is right" about the brain relying on quantum mechanical phenomena, there's no particular reason to think those can't also be mirrored in a human-created machine.

aiforecastthway · a year ago
The original "AI Winter" was primarily a government funding phenomenon [1]. There was no "bubble" in the private sector. I.e., the winter was the result of responsible people in government realizing the hype was over-extended and standing up for the taxpayer. Progress would be made, eventually, but not in that moment. (Those people were correct, btw.)

> But beneath the surface, there are rampant issues: citation rings, reproducibility crises, and even outright cheating. Just look at the Stanford students who claimed to fine-tune LLaMA3 to be multimodal with vision at the level of GPT-4V, only to be exposed for faking their results. This incident is just the tip of the iceberg, with arXiv increasingly resembling BuzzFeed more than a serious academic repository.

Completely agreed. Academia is terminally broken. The citation rings don't bother me. Bibliometrics are the OG karma -- basically, fake internet points. Who cares?

The much bigger problem is that those totally corrupt circular influence rings extend into program director positions and grant review committees at federal funding agencies. Most of those people are themselves academics (on leave, visiting, etc.) who depend on money from the exact sources they are reviewing for. So this time it's their friends' turn, and next time it's their own. And don't dare tell me that this isn't how it works. I've been in too many of those rooms.

It's gotten incredibly bad in ML in particular. Our government needs to cut these people off. I am sick of my tax money going to these assholes (via the NSF, DARPA, etc.). Just stop funding the entire subfield for a few years, tbh. It's that bad.

On the private sector side, I think that the speculative AI bubble will deflate, but also that some real value is being created, and many large institutions are actually behaving quite reasonably compared to previous nonsense cycles. You just have to realize we're mid-to-late cycle, and companies/groups that aren't finding product-market fit with LLM tech in the next 2-3 years are probably not great bets.

--

[1] https://en.wikipedia.org/wiki/Lighthill_report

Animats · a year ago
> There was no "bubble" in the private sector.

There was a small bubble.

There were 1980s AI startups: IntelliCorp and Teknowledge. Intellicorp pivoted from expert systems to UML and was acquired. Teknowledge seems to have disappeared. (The outsourcing company called Teknowledge today seems to be unrelated.) There were the LISP machine companies, Symbolics and LMI. There were a few others, mostly forgotten now.

_19qg · a year ago
The first (big) AI winter he refers to was in the mid 70s.

The (mid/late) 80s AI winter did affect private companies, but that too was mostly because government funding was reduced or eliminated - and that was where their revenue came from. Much of the revenue of the computer hardware and software companies in the 80s AI bubble came from government programs like the Strategic Computing Initiative and the Strategic Defense Initiative ("Star Wars"), both running from 1983 until 1993 with varying levels and aims of funding. That was part of the effort to win the Cold War against the Soviet Union (here, by investing huge amounts of money into modern weapons and defense systems, which also meant computing and AI), which eventually collapsed in the late 80s/early 90s. Also, many of the promises of the AI technology did not materialize -> the private sector did not take over the funding.

mindcrime · a year ago
An "AI Fall" maybe. But "AI Winter"? I really doubt it. And the author of this piece presents very little in the way of compelling arguments for the advent of said AI Winter.

For all the valid criticisms of "AI"[1] today, it's creating too much value to disappear completely and there's no particular reason[2] to expect progress to halt.

[1]: scare quotes because a lot of people today are mis-using the term "AI" to exclusively mean "LLMs", and that's just wrong. There's a lot more to AI than LLMs.

[2]: yes, I'm aware of neural scaling laws, some related charts showing a slow-down in progress, and the arguments around not having enough (energy|data|whatever) to continue scaling LLMs. But see [1] above - there is more to AI than LLMs.

pinkmuffinere · a year ago
> This is how we’re headed for another AI winter, just as we saw with the fall of data science, crypto, and the modern data stack.

The fall of data science??? When did that happen? I’m not squarely in the field, but I thought I would have heard about it

mindcrime · a year ago
> The fall of data science??? When did that happen?

It didn't. "Data science" may not be the latest, trendy, catchy "buzzword of the day", but nothing holds onto that title forever. Losing that crown to the trendy tech du jour isn't the same as "falling off" IMO.

hu3 · a year ago
Similar phenomenon, on a smaller scale, is happening with what I call meta-cloud PaaS, which facilitates web app deployments/provisioning. They usually run on top of AWS or other large clouds, hence meta-cloud.

It started with Heroku, but now it has gained VC attention in the form of Next/Vercel, Laravel Cloud, Void(0), Deno Deploy, and Bun's yet-to-be-announced solution. I'm probably forgetting one or two.

Don't get me wrong, they are legit solutions. But the VC money currently being poured into influencers to push these solutions makes them seem much more appealing than they would be otherwise.

grokkedit · a year ago
Heroku has been around for almost 20 years, Vercel was Zeit ~10 years ago, and both have always been widespread solutions. I wouldn't say there is hype only now.

I cannot vouch for Laravel Cloud or Void(0), since I've never used them, nor will I comment on Deno/Bun since they are far more recent