My niece weighed 3 kg one year ago. Now, she weighs 8.9 kg. By my modeling, she will weigh more than the moon in approximately 50 years. I've analyzed the errors in my model; regrettably, the conclusion is always the same: it will certainly happen within our lifetimes.
Everyone needs to be planning for this -- all of this urgent talk of "AI" (let alone "climate change" or "Holocene extinction") is of positively no consequence compared to the prospect I've outlined here: a mass of HUMAN FLESH the size of THE MOON growing on the surface of our planet!
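(For the curious, here's a minimal sketch of the deliberately absurd extrapolation above: pure exponential growth at the observed one-year rate, with the Moon's mass taken as roughly 7.35e22 kg.)

```python
import math

m0 = 3.0             # kg, one year ago
m1 = 8.9             # kg, now
moon_mass = 7.35e22  # kg, approximate mass of the Moon

# The whole joke: assume the one-year growth factor repeats forever
growth_factor = m1 / m0  # ~2.97x per year
years = math.log(moon_mass / m1) / math.log(growth_factor)

print(f"growth factor: {growth_factor:.2f}x per year")
print(f"years until niece outweighs the Moon: {years:.0f}")  # roughly 46
```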
We have watched many humans grow, so we have a pretty good idea of the curve. A better analogy is an alien blob that appeared one day and went from 3 kg to 9 kg in a year. We have never seen one of these before, so we don't know what its growth curve looks like. But it keeps eating food and keeps getting bigger.
On a more serious note: have these AI doom guys ever dealt with one of these cutting-edge models on out-of-distribution data? They suck so, so bad. There's only so much data available, and the models have basically slurped it all.
Let alone the basic thermodynamics of it. There's only so much entropy out there in cyberspace to harvest; at some point you run into a wall, and then you have to build real robots to go collect more in the real world. And how's that going for them?
Also I can't help remarking: the metaphor you chose is science fiction.
Yeah, but we're not talking about alien blobs, we're talking about pre-trained transformers. I'm 100% certain that if you make them bigger and better, then all you will have is a bigger, better pre-trained transformer.
"Scale it up (and sprinkle some magic fairy dust on it?) and it'll become sentient" seems to be the thought process. It didn't work for CYC, and it's not going to work here either. We need architecture, not scale or efficiency or bells and whistles. Get rid of pre-training, design an architecture and learning algorithm that will learn continuously and incrementally from its own actions and mistakes (i.e. prediction failures), and we'll start to get somewhere.
We have seen a lot of things grow before though, so the pragmatic choice is to build some diminishing returns into the model. Not doing so is alarmist.
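As a toy illustration of why that matters: the same two data points are consistent with both a pure exponential and a logistic (S-curve) that saturates. The 70 kg ceiling below is an arbitrary assumption standing in for "adult weight"; the only point is how far the two extrapolations diverge.

```python
import math

m0, m1 = 3.0, 8.9  # kg at t = 0 and t = 1 year, from the joke above
K = 70.0           # kg, assumed ceiling (roughly an adult) -- pure assumption

# Exponential model fitted to the two points: m(t) = m0 * r**t
r = m1 / m0

# Logistic model fitted to the same two points: m(t) = K / (1 + A * exp(-k*t))
A = K / m0 - 1
k = math.log(A * m1 / (K - m1))

for t in (5, 10, 50):
    exp_m = m0 * r**t
    log_m = K / (1 + A * math.exp(-k * t))
    print(f"t = {t:>2} yr   exponential: {exp_m:10.3g} kg   logistic: {log_m:5.1f} kg")
```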
I came to say the same thing. All my work with LLMs so far has convinced me that it will be dishwasher-level technology.
What I mean is that some tasks will be automated by it so completely that no business can function economically any more without utilizing its productivity increases, but you will still need cooks. It might even be refrigeration-style technology, which would fundamentally restructure the whole supply chain of at least a lot of service companies. Which is huge in terms of business, but not a sci-fi novel yet.
LOL, exactly. All of the weird AGI/doomer/whatever-we're-calling-it bullshit feels like exactly this: people who think they're too smart to fall prey to groupthink and confirmation bias, and yet predictably fall prey to groupthink and confirmation bias.
There's a reason AI 2027 and a lot of those weird guys come from (or overlap heavily with) the rationalist movement. <insert Obama giving Obama a medal meme>
So... both authors predict superhuman intelligence, defined as AI that can complete tasks that would take humans hundreds of hours, to be a thing "sometime in the next few years", both authors predict "probably not before 2027, but maybe" and both authors predict "probably not longer than 2032, but maybe", and one author seems to think their estimates are wildly better than those of the other author.
That's not quite the level of disagreement I was expecting given the title.
As far as I can tell, the author of the critique specifically avoids espousing a timeline of his own. Indeed, he dislikes how these sorts of timeline models are used in general:
> I’m not against people making shoddy toy models, and I think they can be a useful intellectual exercise. I’m not against people sketching out hypothetical sci-fi short stories, I’ve done that myself. I am against people treating shoddy toy models as rigorous research, stapling them to hypothetical short stories, and then taking them out on podcast circuits to go viral. What I’m most against is people taking shoddy toy models seriously and basing life decisions on them, as I have seen happen for AI2027. This is just a model for a tiny slice of the possibility space for how AI will go, and in my opinion it is implemented poorly even if you agree with the author's general worldview.
In particular, I wouldn't describe the author's position as "probably not longer than 2032" (give or take the usual quibbles over what tasks are a necessary part of "superhuman intelligence"). Indeed, he rates social issues from AI as a more plausible near-term threat than dangerous AGI takeoff [0], and he is very skeptical about how well any software-based AI can revolutionize the physical sciences [1].
[0] https://titotal.substack.com/p/slopworld-2035-the-dangers-of...
[1] https://titotal.substack.com/p/ai-is-not-taking-over-materia...
But what is the difference between a shoddy toy model and real-world, professional "rigorous research"?
It's like asking about the difference between amateur toy audio gear and real pro-level audio gear... (which is not a simple thing, given that "prosumer" products dominate the landscape).
The only point in betting on when "real AGI" will happen boils down to the payouts from gambling on it. Are such gambles a zero-sum game? Does that depend on who escrows the bet?
What do I get if I am correct? How should the incorrect lose?
I don't think the author of this article is making any strong prediction, in fact I think a lot of the article is a critique of whether such an extrapolation can be done meaningfully.
Most of these models predict superhuman coders in the near term, within the next ten years. This is because most of them share the assumption that a) current trends will continue for the foreseeable future, b) that “superhuman coding” is possible to achieve in the near future, and c) that the METR time horizons are a reasonable metric for AI progress. I don’t agree with all these assumptions, but I understand why people that do think superhuman coders are coming soon.
Personally I think any model that puts zero weight on the idea that there could be some big stumbling blocks ahead, or even a possible plateau, is not a good model.
The primary question is always whether they'd have made those sorts of predictions based on the results they were seeing in the field the same amount of time ago.
Pre-ChatGPT, I very much doubt the bullish predictions on AI would've been made the way they are now.
He shows it might be possible based on the model's math, but doesn't actually say what his own prediction is. He also argues it's possible we are on an S-curve that levels out before superhuman intelligence.
I expect the predictions for fusion back in the 1950s and 1960s generated similar essays: they had not gotten to ignition, but the science was solid. The 'science' of moving from AGI to ASI is not really that solid; we have yet to achieve 'AI ignition' even in the lab. (Any AIs that have achieved consciousness, feel free to disagree.)
I do agree generally with this, but AI 2027 and other writings have moved my concern from 0% to 10%.
I know I sound crazy writing it out, but many of the really bad scenarios don't require consciousness or anything like that. It just requires that they be self-replicating and able to operate without humans shutting them off.
Anyone old enough to remember EPIC 2014? It was a viral flash video, released in 2004, about the future of Google and news reporting. I imagine AI 2027 will age similarly well.
https://youtu.be/LZXwdRBxZ0U
- Google buying TiVo is very funny, but ended up being accurate
- Google GRID is an interesting concept, but we did functionally get this with Google Drive
- MSN Newsbotster did end up happening, except it was Facebook circa ~2013+
- GoogleZon is very funny, given they both built this functionality separately
- Predicting summarized news was at least 13 years too early, but it still came true
- NYT vs GoogleZon also remarkably prescient, though about 13 years too early as well
- EPIC pretty accurately predicts the TikTok and Twitter revenue share, though, again, about 12 years too early
- NYT still hasn't gone offline; it was bolstered by viewership during the first Trump term, though print subscriptions are the lowest they've ever been
Really great video - it does seem like they predicted 2024 more than 2014, when people unironically thought Haitians were eating dogs and that food prices had gone up 200% because of what they saw on TikTok, and elected a wannabe tyrant as a result.
These predictions seem wildly reductive in any case, and extrapolating AI's ability to complete tasks that would take a human 30 seconds -> 10 minutes is far different from going from 10 minutes to 5 years. For one, a 5-year task generally requires much more input and intent than a 10-minute task. Already we have ramped up from "enter a paragraph" to complicated Cursor rules and rich context prompts to get to where we are today. This is completely overlooked in these simple "graphs go up" predictions.
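To put rough numbers on the gap (treating each step up in task length as a doubling of the time horizon, and assuming a human work-year of about 2,000 hours):

```python
import math

def doublings(t_from_min: float, t_to_min: float) -> float:
    """How many successive doublings take you from one task length to another."""
    return math.log2(t_to_min / t_from_min)

thirty_seconds = 0.5
ten_minutes = 10.0
five_years = 5 * 2000 * 60  # assumed: ~2000 work hours per year -> 600,000 minutes

print(f"30 s   -> 10 min: {doublings(thirty_seconds, ten_minutes):.1f} doublings")  # ~4.3
print(f"10 min -> 5 yr  : {doublings(ten_minutes, five_years):.1f} doublings")      # ~15.9
```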
The recent Apple "LLMs can't reason yet" paper was exactly this. They just tested if models could run an exponential number of steps.
Of course, they gave it a terrible clickbait title and framed the question and graphs incorrectly. But had they done the study better, it would have been "How long a sequence of algorithmic steps can LLMs execute before making a mistake or giving up?"
A human can do a long sequence of easy tasks without error, or can easily correct mistakes along the way. Can a model do the same?
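One way to make that question concrete: if a model has a fixed per-step reliability and no ability to catch its own mistakes, the chance of finishing a long procedure decays geometrically. The 99% and 99.9% figures below are illustrative assumptions, not measured numbers.

```python
# With per-step success probability p and no error recovery,
# the chance of an n-step run with zero mistakes is p ** n.
for p in (0.99, 0.999):
    for n in (10, 100, 1000):
        print(f"per-step accuracy {p}: {n:5d} steps -> {p**n:.2%} chance of a flawless run")
```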
Everyone discussing some AGI superbeing a la Skynet is falling hook, line, and sinker for the hype pushed hard by AI companies.
These things are dangerous not because of some sci-fi event that might or might not happen X years from now; they're dangerous now, for perfectly predictable reasons stemming primarily from executive and VC greed. They don't have to be hyperintelligent systems that are actually as good as or better than a human at everything; you just need to sell enough CEOs on the idea that they're good enough now to reach a problematic state of the world. Hell, the current "agents" they're shoving out are terrible, but the danger here stems from idiots hooking these things up to actual real-world production systems.
We already have AI systems deciding who does or doesn't get a job, who gets fines and tickets from blurry imagery where they fill in the gaps, and who gets banned from monopolistic digital platforms. Hell, all the grifters and scammers are already using these systems, because what they care about is quantity and not quality. Yet instead of discussing the actual real dangers happening right now and what we can do about it, we're focusing on some amusing but ultimately irrelevant sci-fi scenarios that exist purely as a form of viral marketing from AI CEOs who have gigantic vested interests in making it seem as if the black boxes they're pushing out into the world are anything like the impressive hyperintelligences you see in sci-fi media.
I'm as big of a fan of Philip K. Dick as anyone else, and maybe there is some validity to worrying a bit about this hypothetical Skynet/Blade Runner/Butlerian Jihad future, but how about we shift more of our focus to the here and now, where real dangers already exist?
“Everyone discussing some fission superbomb is falling for the hype.
Nuclear reactors are dangerous not because of some sci-fi chain reaction that might or might not happen, they're dangerous now for perfectly predictable reasons stemming primarily from radiation and radioactive waste.”
The straightforward mitigation for the hypothetical situation is to halt development; this is not what the AI companies are pushing for, so I'm not convinced that this line of thinking can be meaningfully attributed to the marketing strategy of AI companies.
The analogy doesn't work because the bomb came first and the reactor later - that is, with fission, the reactor wasn't even here yet when the bomb arrived. And it was clear from the beginning that the chain reaction was real, not hypothetical.
> Yet instead of discussing the actual real dangers happening right now and what we can do about it
This is almost the definition of "being short-sighted".
You're not wrong that there are pressing matters due to the rise of AI in various forms, but a (let's say) 'military-grade'/state-level AGI system is very much something to worry about hard and early. You can scream "AI hype" as much as you want, but if China gets to AGI first and abuses it to gain world dominance we're still royally fucked (in a variety of situations, including where they lose control of it). If you start thinking about that when it happens, you're already way too late.
You are entirely free to focus on short-term problems, but don't shit on the people who put an effort in looking a bit farther into the future.
We are getting there already with computers. They have been doing some clever tricks lately to keep up, but the reality is that transistor size is not shrinking at the rate it was before, and it already shows (in CPU/GPU die sizes, for example).
If your model relies on exponential (geometric) growth, it is very likely wrong.
For instance, my understanding is that computers have been getting exponentially faster for 70 years.
I realize the thing they are modeling (intelligence?) is a bit hard to define so I hope/think they are wrong. But I'm not sure how to be so certain.