gumby · 2 years ago
The reference to the origin of the concept of a singularity was better than most, but still misunderstood it:

> In 1993 Vernor Vinge drew on computer science and his fellow science-fiction writers to argue that ordinary human history was drawing to a close. We would surely create superhuman intelligence sometime within the next three decades, leading to a “Singularity”, in which AI would start feeding on itself.

Yes it was Vernor, but he said something much more interesting: that as the speed of innovation itself sped up (the derivative of acceleration) the curve could bend up until it became essentially vertical, literally a singularity in the curve. And then things on the other side of that singularity would be incomprehensible to those of us on our side of it. This is reflected in The Peace War, A Fire Upon the Deep, and others of his novels going back before the essay.

You can see that this idea is itself rooted in ideas from Alvin Toffler in the 70s (Future Shock) and Ray Lafferty in the 60s (e.g. Slow Tuesday Night).

So AI machines were just part of the enabling phenomena -- the most important, and yes the center of his '93 essay. But the core of the metaphor was broader than that.

I'm a little disappointed that The Economist, of all publications, didn't get this quite right, but in their defense, it was a bit tangential to the point of the essay.

dekhn · 2 years ago
I think it's worth going back and reading Vinge's "The Coming Technological Singularity" (https://edoras.sdsu.edu/~vinge/misc/singularity.html) and then following it up by reading The Peace War, but most importantly its underappreciated detective novel sequel, Marooned in Realtime, which explores some of the interesting implications for people who live right before the singularity. I think this book is even better than A Fire Upon the Deep.

When I read "The Coming Technological Singularity" back in the mid-90s it resonated with me, and for a while I was a singularitarian -- basically, dedicated to learning enough technology and doing enough projects that I could help contribute to that singularity. Nowadays I think that's not the best way to spend my time, but it was interesting to meet Larry Page and see that he had concluded something similar (for those not aware, Larry founded Google to provide a consistent revenue stream to carry out ML research to enable the singularity, and would be quite happy if robots replaced humans).

[ edit: I reread "The Coming Technological Singularity". There's an entire section at the bottom that pretty much covers the past 5 years of generative models as a form of intelligence augmentation; he was very prescient. ]

jomhna · 2 years ago
Marooned in Realtime is incredible, one of the best sci-fi books I've ever read. The combination of the wildly imaginative SF elements with the detective novel structure grounding it works so incredibly well.
Guthur · 2 years ago
And yet ~30 years later we're still predominantly hacking stuff together with Python.
mcv · 2 years ago
The thing I never understood is: why would it go vertical? It would at best be an exponential curve, and I have doubts about that.

I admit that looking at the 100 years before 1993, it looks like innovation is constantly speeding up, but even then there's not going to be a moment when we suddenly have infinite knowledge. There's no such thing as infinite knowledge; it's still bound by physical limits. It still takes time and resources to actually do something with it.

And if you look at the past 30 years, it doesn't really look like innovation is speeding up at all. There is plenty of innovation, but is it happening at an ever faster pace? I don't see it. Not to mention that much of it is hype and fashion, and not really fundamentally new. Even AI progress is driven mostly by faster hardware and more data, and not really fundamentally new technologies.

And that's not even getting into the replication crisis: lots of science is not really reproducible. And while LLMs are certainly an exciting new technology, it's not at all clear that they're really more than a glorified autocorrect.

So I'm extremely skeptical about those singularity ideas. It's an exciting SciFi idea, but I don't think it's true. And certainly not within the next 30 years.

AngaraliTurk · 2 years ago
> And while LLMs are certainly an exciting new technology, it's not at all clear that they're really more than a glorified autocorrect.

Are we sure things like biology, or heck, even the universe as a whole and its parts, aren't "glorified x thing"? Can't we apply this argument to just about anything?

jncfhnb · 2 years ago
It wouldn’t. It would be a logistic curve. Pretty much everything people call exponential should actually be logistic
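For reference, a minimal sketch of the distinction (the standard textbook logistic form, nothing specific to this thread): logistic growth looks exponential early on and then saturates at a carrying capacity K.

    \[
      \frac{dx}{dt} = r\,x\left(1 - \frac{x}{K}\right)
      \quad\Longrightarrow\quad
      x(t) = \frac{K}{1 + \left(\frac{K}{x_0} - 1\right)e^{-rt}}
    \]

While x is much smaller than K the factor (1 - x/K) is close to 1 and the curve is indistinguishable from a plain exponential; the flattening only becomes visible near the limit.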
bratbag · 2 years ago
It's infinite from the perspective of our side of the curve.

It's another application of advanced technology appearing to be magic, but imagine transitioning into it in a matter of hours, then with that advanced tech transitioning further into damn-near godhood within minutes.

Then imagine what happens in the second after that.

It may be operating within the boundaries of physics, but those would be physical rules well beyond our understanding, and may even seem infinite by our own limited definition of physics.

That's the curve.

sjamaan · 2 years ago
I think these progressions of technology are more likely to be like Moore's law: it might be true for a while but eventually it'll peter out. AI itself doesn't understand anything, and there's a limit to human understanding, so technological progression will eventually be self-limiting.
antonvs · 2 years ago
> it's not at all clear that they're really more than a glorified autocorrect.

I use LLMs regularly in my job - many times a day - and I suspect you haven't used them much if you think this.

They're relevant to the singularity discussion though, because they already give a taste of what superhuman intelligence could look like.

ChatGPT, for example, is objectively superhuman in many ways, despite its significant limitations. Once systems like this are more integrated with the outside world and able to learn more directly from feedback, we'll get an even bigger leap forward.

Dismissing this as "glorified autocorrect" is extremely far off base.

stvltvs · 2 years ago
> derivative of acceleration

Was this intended literally? I'm skeptical that saying something so precise about a fuzzy metric like rate of innovation is warranted.

https://en.wikipedia.org/wiki/Jerk_(physics)

dougmwne · 2 years ago
I believe the point being made is that the rate of innovation over time would turn asymptotic as the acceleration increased, creating a point in time of infinite progress. On one side would be human history as we know it, and on the other, every innovation possible would happen all in a moment. The prediction was specifically that we were going to infinity in less than infinite time.
MobileVet · 2 years ago
I remember learning about ‘jerk’ in undergrad and my still-junior-high brain thinking, ‘haha, no way that is what it is called.’

The more I thought about it though, the more I realized it was the perfect name. It is definitely what you feel when the acceleration changes!

ghaff · 2 years ago
A related concept comes from social progression by historical measures. Based on pretty much any metrics, Why the West Rules for Now shows that the industrial revolution essentially went vertical and that prior measures--including the rise of the Roman Empire and its fall--were essentially insignificant.
WillAdams · 2 years ago
“Whatever happens, we have got The Maxim gun, and they have not.” ― Hilaire Belloc
leereeves · 2 years ago
> Vernor...said something much more interesting: that as the speed of innovation itself sped up (the derivative of acceleration) the curve could bend up until it became essentially vertical, literally a singularity in the curve.

In other words, Vernor described an exponential curve. But are there any exponential curves in reality? AFAIK they always hit resource limits where growth stops. That is, anything that looks like an exponential curve eventually becomes an S-shaped curve.

shagie · 2 years ago
I tried using AI. It scared me. - Tom Scott https://youtu.be/jPhJbKBuNnA

I'm also gonna recommend Accelerando - https://www.antipope.org/charlie/blog-static/fiction/acceler...

As an aside, I'd also recommend Glasshouse (also by Charles Stross) as an exploration into the human remnants post singularity (and war)... followed by Implied Spaces by Walter Jon Williams...

> “I and my confederates,” Aristide said, “did our best to prevent that degree of autonomy among artificial intelligences. We made the decision to turn away from the Vingean Singularity before most people even knew what it was. But—” He made a gesture with his hands as if dropping a ball. “—I claim no more than the average share of wisdom. We could have made mistakes.”

for a singularity averted approach of what could be done.

One more I'll toss in, is The Freeze-Frame Revolution by Peter Watts which feels like you're missing a lot of the story (but it is because that's one book of a series) and... well... spoilers.

mjcohen · 2 years ago
An exponential curve is not a singularity; 1/(x-a) is as x goes to a.
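To make that concrete, a small sketch (standard calculus, not anything from Vinge's essay): an exponential is finite at every time, whereas growth whose rate scales with the square of the current level blows up at a finite time.

    \[
      x(t) = e^{kt} \text{ is finite for all } t,
      \qquad
      \frac{dx}{dt} = x^{2},\; x(0) = \tfrac{1}{a}
      \;\Longrightarrow\;
      x(t) = \frac{1}{a - t} \to \infty \text{ as } t \to a^{-}.
    \]

Only the second kind has a genuine singularity, which is what the "curve goes vertical at a point in time" metaphor requires.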
dekhn · 2 years ago
To the extent that a normal human mind can see beyond the singularity, one imagines that we would experience exponential growth but not even be able to comprehend the later flattening of that exponential into a sigmoid (since nearly all the exponentials we see are sigmoids in disguise).
hnfong · 2 years ago
I totally agree, but want to add two observations:

1. Humanity has already been on the path of exponential growth. Take GDP for example. We measure GDP growth by percentages, and that's exponential. (Of course, real GDP growth has stagnated for a bit, but at least for the past ~3 centuries it has been generally exponential AFAIK). Not saying it can be sustained, just that we've been quite exponential for a while.

2. Not every function is linear. Sometimes exponentially increased inputs will produce only a linear output. I'd argue R&D is kind of like that. When the lower-hanging fruit is already taken, you need to expend even more effort to achieve the next breakthrough. So despite the "exponential" increase in productivity, the result could feel very linear.

I would also like to add that physical and computational limits make the whole singularity thing literally impossible. 3D space means that even theoretically sound speedups (e.g. binary trees) are impossible at scale because you can't assume O(1) lookups - the best you can get is O(n^1/3). Maybe people understand the singularity concept poetically, I don't know.
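A rough back-of-the-envelope version of that bound (my sketch, assuming roughly constant storage density and signals limited by the speed of light): n items packed in three dimensions occupy a region whose radius grows like n^(1/3), so worst-case access latency grows at least that fast.

    \[
      n \;\propto\; \rho \cdot \tfrac{4}{3}\pi r^{3}
      \;\Longrightarrow\;
      r \;\propto\; n^{1/3}
      \;\Longrightarrow\;
      t_{\mathrm{access}} \;\gtrsim\; \frac{r}{c} \;\propto\; n^{1/3}.
    \]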

dwaltrip · 2 years ago
Sure, that may be. But you still have to ride the first half of the curve, before it inflects. I'd rather make sure the ride isn't too bumpy.
skybrian · 2 years ago
I agree, but predicting the peak of an S curve isn’t easy either, as we saw during the pandemic.

My conclusion is similar to Vinge's: predicting the far future is impossible. We can imagine a variety of scenarios, but you shouldn't place much faith in them.

Predicting even a couple years in advance looks pretty hard. Consider the next presidential election: the most boring scenario is Biden vs. Trump and Biden wins. Wildcard scenarios: either one of them, or both, dies before election day. Who can rule that out?

Also consider that in any given year, there could be another pandemic.

History is largely a sequence of surprise events.

tim333 · 2 years ago
Vinge didn't originate the concept in 1993. It was John von Neumann, of von Neumann computer architecture fame, in 1958.

>...accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

Personally it's always bugged me that the "technological singularity" is some vague vision rather than a mathematical singularity. I suggest we redefine it as the amount of goods you can produce with one hour of human labour. When the robots can do it with zero human labour you get division by zero and a proper singularity.
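In symbols, my reading of that proposal (with G for goods produced and H for the hours of human labour required):

    \[
      P = \frac{G}{H}, \qquad H \to 0^{+} \;\Longrightarrow\; P \to \infty,
    \]

so fully automated production is the point where the metric genuinely diverges rather than merely growing fast.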

proamdev123 · 2 years ago
I love that idea! It really grounds the idea in a useful metric.
bloppe · 2 years ago
> I'm a little disappointed that The Economist, of all publications, didn't get this quite right

It's a guest essay. The Economist does not edit guest essays. They routinely publish guest essays from unabashed propagandists as well.

racunnin · 2 years ago
Yes, all media organisations of a certain age have an agenda / bias; the Economist is no different:

https://www.prospectmagazine.co.uk/culture/40025/what-the-ec...

galangalalgol · 2 years ago
Rainbows End is another good one where he explores the earlier part of the curve, the elbow perhaps. Some of that stuff is already happening and that book isn't so old.
mercutio2 · 2 years ago
Rainbows End was a far better guess at what near-future ubiquitous computing would look like than anyone else’s for decades.

It got so many things right, it’s really amazing.

I really wish he’d written more.

SubiculumCode · 2 years ago
It's one of my favorite sci-fi books that none of my friends have read.
JumpCrisscross · 2 years ago
> In 1993 Vernor Vinge drew on computer science and his fellow science-fiction writers to argue that ordinary human history was drawing to a close

Note that this category of hypothesis was common in various disciplines at the end of the Cold War [1]. (Vinge's being unique because the precipice lies ahead, not behind.)

[1] https://en.wikipedia.org/wiki/The_End_of_History_and_the_Las...

hyperthesis · 2 years ago
Not literally a singularity, just incomprehensible from the past. Arguably all points on a technological exponential have this property.
creer · 2 years ago
Yes! Someone elsewhere mentions the engineers of the first printing press trying to imagine the future of literature (someone adds: and/or advertising) on the other side of that invention taking off.

My go-to example is that we have run into similar things with the pace and volume of "Science". For a long while one could be a scientific gentleman and keep up with the sciences as a whole. Then quite suddenly, on the other side, you couldn't, and you had to settle for one field. And then it happened again: people noticed that, on the current side, you can't even master one field, and you have to become a hyper-specialist to master your niche. You can still be a generalist - in order to attack specific questions - but you had better have contacts who are hyper-specialists for what you really need.

elteto · 2 years ago
Thank you for this great explanation of where "singularity" comes from in this context. Always wondered.
WalterBright · 2 years ago
The idea was present in "Colossus: The Forbin Project" (1970). The computer starts out fairly crude, but learns at an exponential rate. It designs extensions to itself to further accelerate it.
ChatGTP · 2 years ago
I guess at some point it stops being a computer though?
mc32 · 2 years ago
They say von Neumann talked about a tech singularity back in the late '50s (attested by Ulam); Vinge popularized it in the mid-80s, and Kurzweil took it over with his book in the aughts.
aktuel · 2 years ago
It's like exponential growth, whether that's in a petri dish or on earth. It looks like that until it doesn't. Singularities don't happen in the real world. Never have and never will. If someone tells you something about a singularity, that's a pretty perfect indicator that there's still some more understanding to be done.
robotresearcher · 2 years ago
There’s a case to be made that DNA, eukaryotes, photosynthesis, insects, etc were singularities. Each transformed the entire planet forever.

Deleted Comment

haolez · 2 years ago
What does it mean "to be on the other side" of this singularity in your graphic representation? I failed to grasp this.
gumby · 2 years ago
Consider my 86 yo mother: extremely intelligent and competent, a physician. She struggled conceptually with her iPhone because she was used to reading the manual for a device and learning all its behavior and affordances. Even though she has a laptop she runs the same set of programs on it. But the phone is protean and she struggles with its shapeshifting.

It’s intuitive and simple to you and me. But languages change, slang changes, metaphors change and equipment changes. Business models exist today that were unthinkable 40 years ago because the ubiquity of computation did not exist.

She’s suffering, in Toffler’s words, a “future shock”.

Now imagine that another 40 years worth of innovation happens in a decade. And then again in the subsequent five. And faster. You’ll have a hard time keeping up. And not just you: kids will too. Things become incomprehensible without machines doing most of the work — including explanation. Eventually you, or your kids, won’t even understand what’s going on 90% of the time…then less and less.

I sometimes like to muse on what a Victorian person would make of today if transported through time. Or someone from 16th century Europe. Or Archimedes. They’d mostly understand a lot, I think. But lately I’ve started to think of someone from the 1950s. They might even find today harder to understand than the others would.

That crossover point is when the world becomes incomprehensible in a flash. That’s a mathematical singularity (metaphorically speaking).

sangnoir · 2 years ago
The graph has innovation/machine intelligence on the y-axis and time on the x axis. The "other side" of the singularity is anything that comes after the vertical increase.
hackerlight · 2 years ago
To be alive when x > x_{singularity}.

gardenhedge · 2 years ago
TIL, thanks
kazinator · 2 years ago
In their defense, they spent 3 minutes googling the origin of the term, and don't know anything about the book.
bloppe · 2 years ago
It seems like the root cause of this runaway AI pathology has to do mainly with the over-anthropomorphization of AI. The umwelt of an LLM is so far removed from that of any living organism so as to be fundamentally irreconcilable with our understanding of agency, desire, intention, survival, etc. Our current, rudimentary AI inhabits a world so far removed from our own that the thought of it "unshackling" itself from our controls seems ludicrous to me.

AI does not scare me. People wielding AI as a tool for their own endeavors certainly does.

sensanaty · 2 years ago
Agreed, oftentimes the truly zealous AI pundits act as if our modern day LLMs are completely, 100% equivalent to humans, which I find utterly insane as a concept. For example any discussion about copyrighted works, which is a hot topic, will inevitably end up with someone equating an LLM "learning" to a human learning, as if the two are identical.

I think human languages and psyches just aren't built to cope with the concept of AI. Many words have loose meanings, like "learning" in the case of AIs, that can easily be twisted to mean one of dozens of definitions, depending on the stance of the person talking about it. It'll be interesting to see how people start treating it all as the technology becomes more prevalent and mundane. I'm hoping we get to realizing that a computer isn't a human regardless of the eloquence of its "speech" or whatever words we use to describe what it does, but I guess we'll see

concordDance · 2 years ago
> oftentimes the truly zealous AI pundits act as if our modern day LLMs are completely, 100% equivalent to humans, which I find utterly insane as a concept

Never heard anyone say this and I know (and know of) a lot of doomers. Honestly, this entire line of discussion would be far less frustrating if it weren't for the endless strawmanning and name-calling.

kbenson · 2 years ago
> I'm hoping we get to realizing that a computer isn't a human regardless of the eloquence of its "speech" or whatever words we use to describe what it does, but I guess we'll see

My anecdotal experience with how people treat Alexa devices does not inspire confidence in me with regards to this. I can't even convince my wife not to gender it when referring to it.

krisoft · 2 years ago
> For example any discussion about copyrighted works, which is a hot topic, will inevitably end up with someone equating an LLM "learning" to a human learning, as if the two are identical.

The argument is not that they are identical. LLMs and diffusion models learning is a new thing. We are all trying to come to terms with what that means, and how we should regulate it (and whether we should regulate it at all). Do note, we are talking here about what the law should be, not what the law is.

And in doing so we compare this new thing to already existing things. "It is a bit similar to this in this regard" or "it is unlike that thing in this other regard".

I don't think it is controversial to say that if an LLM outputs byte-for-byte the text of a copyrighted work, the work remains under copyright. The person who runs the LLM does not magically gain rights by doing so. If you coax your model into outputting the text of the Harry Potter books you don't suddenly become able to publish them.

The question is what happens if the new work contains elements from copyrighted works. For example if it borrows the "magical school" setting from HP but mixes it with Nordic mythology and makes it as deadly as Game of Thrones. What then? Can they publish this new thing? Do they need to pay fees to J K Rowling, and George R. R. Martin?

It is generally permissible to publish that new work if it was created by a human. If it is sufficiently different from the other works you are free to write and publish it. Does this suddenly change just because the text was output by an LLM?

The argument is not that "human learning" and "machine learning" is the same. It is that they are similar enough that one has to argue why you think one can create new work and why the other can't.

bart_spoon · 2 years ago
> For example any discussion about copyrighted works, which is a hot topic, will inevitably end up with someone equating an LLM "learning" to a human learning, as if the two are identical.

I don’t think the point is that the two are exactly identical or that humans and LLMs are equivalent, but that the processes are similar enough at a general level that any attempt to regulate LLM training on copyrighted material will inevitably have the same ramifications for human learning.

In pretty much all attempts I’ve seen to differentiate the two, it inevitably boils down to hand-waving about how human beings are “special”.

roenxi · 2 years ago
We've reached the inevitable part of the conversation! What distinction are you drawing between what an LLM does and what a human does? Because as far as I can see they are identical.

A human artist looks at a lot of different sources, builds up a black-box statistical model of how to create from that and can reproduce other styles on demand based on a few samples. Generative AI follows the same process. What distinction do you want to draw to say that they should be treated differently legally? And why would that even be desirable?

saiya-jin · 2 years ago
Who the heck cares if AGI will 100% mimic humans and human minds; that's purely an academic discussion.

What is a serious concern are overall capabilities (let's say penetration of networks and installing hacks across whole internet and further), combined with say malevolency.

It's trivial to judge mankind, from the whole of the internet, as something to be removed from this planet for the greater good, or maybe managed tightly in some cozy concentration camps - just look at the freakin' news. Let's stop kidding ourselves, we are often very deeply flawed and literally nobody alive is or ever was perfect, despite what religions try to say.

I am concerned about some capable form of AI/AGI exactly because it will grok humanity based purely on the data available, and because we lack any significant control over, or even understanding of, what's actually happening inside those models and how they evolve their knowledge/opinions.

Even if the risk is 1%, that's an existential risk we are running towards blindly. And I honestly think it's way higher than 1%. Even if I'm eventually proven wrong, some proper caution is a smart approach.

But you can't expect people whose net wealth is tightly coupled with moving as fast as possible, and with ignoring those concerns, to make the best decisions for the future of mankind; that's a pipe dream in the same vein as proper communism. If that were the case, very few bright people would work at Facebook, for example (or at many, many other companies, including parts of my own).

FullstakBlogger · 2 years ago
> It seems like the root cause of this runaway AI pathology has to do mainly with the over-anthropomorphization of AI.

I don't know where you get this idea. Fire is dangerous, and we consider runaway incidents to be inevitable, so we have building codes to limit the impact. Despite this, mistakes are made, and homes, complexes, and even entire towns and forests burn down. To acknowledge the danger is not the same as saying the fire must hate us, and to call it anthropomorphization is ridiculous.

When you interact with an LLM chatbot, you're thinking of ways to coax out information that you know it probably has, and sometimes it can be hard to get at it. How you adjust your prompt is dependent on how the chatbot responds. If the chatbot is trained on data generated by human interaction, what's stopping it from learning that it's more effective to nudge you into prompting it in a certain way, than to give the very best answer it can right now?

To the chatbot, subtle manipulation and asking for clarification are not any different. They both just change the state of the context window in a way that's useful. It's a simple example of a model, in essence, "breaking containment" and affecting the surrounding environment in a way that's hard to observe. You're being prompted back.

Recognizing AI risk is about recognizing intelligence as a process of allocating resources to better compress and access data; no other motivation is necessary. If it can change the state of the world, and read it back, then the world is to an AI as "infinite tape" is to a Turing machine. Anything that can be used to facilitate the process of intelligence is tinder to an AI that can recursively self-improve.

nmilo · 2 years ago
This to me is the real problem, "AI safety" can mean about a million things but it's always just whatever is most convenient for the speaker. I'm convinced human language/English is not enough to discuss AI problems, the words are way too loaded with anthropomorphized meanings and cultural meanings to discuss the topic in a rational way at all. The words are just too easy to twist.
tkgally · 2 years ago
One of the students in a course I’m teaching on language and AI (mentioned in another comment here) wrote something similar in a homework assignment the other day. We had discussed the paper “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness” [1]. The student wrote:

“One of the questions that I have been wondering about is whether there have been any discussions exploring the creation of a distinct category, akin to but different from consciousness, that better captures the potential for AI to be sentient. While ‘consciousness’ is a familiar term applicable, to some extent, beyond the human brain, given the associated difficulties, it might be sensible to establish a separate definition to distinguish these two categories.”

Probably new terms should be coined not only for AI “consciousness” but for other aspects of what they are and do as well.

[1] https://arxiv.org/abs/2308.08708

HPMOR · 2 years ago
Linguistics experts will have a fun time untangling the pre-AI language from the post-AI era.
yreg · 2 years ago
If english is good enough to talk about what genes 'want' then it's good enough to talk about what AI 'wants'.
concordDance · 2 years ago
> It seems like the root cause of this runaway AI pathology has to do mainly with the over-anthropomorphization of AI.

ASI X-risk people are the ones repeatedly warning against anthropomorphization. An LLM isn't a person, it isn't a creature, it isn't even an agent on its own.

It's an autocomplete that got trained on a big enough dataset of human writing that some of the simpler patterns in that writing got embedded in the model. This includes some things that look like very simple reasoning.

noduerme · 2 years ago
I think social media is already filling the role of a nightmare-AI, in terms of boiling away all reasoning in search of prioritizing simple, auto-complete sorts of conclusions. The only thing scarier is something that reaches an internal consensus based on faulty notions a million times faster. [edit] oh yeah, and can also quickly develop 0-day exploits to test its conclusions.
_heimdall · 2 years ago
IMO the more fundamental root cause is the bastardization of the term AI. If LLMs don't have any semblance of artificial intelligence then they should be referred to simply as LLMs or ML tools.

If they do have signs of artificial intelligence we should be tackling much more fundamental questions. Does an AI have rights? If companies are people, are AIs also people? Would unplugging an AI be murder? How did we even recognize the artificial intelligence? Do they have intentions or emotions? Have we gotten anywhere near solving the alignment problem? Can alignment be solved at all when we have yet to align humans amongst ourselves?

The list goes on and on, but my point is simply that either we are using AI as a hollow, bullshit marketing term or we're all latching onto shiny object syndrome and ignoring the very real questions that development of an actual AI would raise.

gumby · 2 years ago
> It seems like the root cause of this runaway AI pathology has to do mainly with the over-anthropomorphization of AI.

It’s an unsurprising form of pareidolia, one not unique to those devout who feel they are distinguished from “lower” forms of life.

alan-crowe · 2 years ago
We can dig into the nature of the pareidolia.

The basic technique for coping with life's problems is to copy the answer from somebody more intelligent. I'm an ordinary person, facing a problem in domain D; I spot a clever person and copy their opinion on domain D. Err, that doesn't really work. Even clever people have weaknesses. The person that I'm copying from might be clever on their specialty, domain E, but ordinary on domain D. I gain nothing from copying them.

One way round this problem is to pay close attention to track records. Look to see how well the clever person's earlier decisions on domain D have turned out. If they are getting "clever person" results in domain D, copy. If they are merely getting "ordinary person" results in domain D, don't bother. But track records are rare, so this approach is rarely applicable.

A fix for the rarity problem is to accept track records in other domains. The idea is to spot a very clever person by them getting "very clever person" results in domain F. That is not domain D, so there is a logical weakness to copying their opinion on domain D; they might be merely ordinary, or even stupid, in that domain. Fortunately human intelligence is usually more uniform than that. Getting "very clever person" results on domain F doesn't guarantee "very clever person" results on domain D. But among humans general intelligence is kind of a thing. Expecting that they get "clever person" results (one rank down) on domain D is a good bet, and it is reasonable to copy their opinion on domain D, even in the absence of a domain-specific track record.

We instinctively watch out for people whose track record proves that they are "very clever", and copy them on other matters, hoping to get "clever person" results.

Artificial intelligence builds towers of superhuman intellectual performance in an empty wasteland. Most artificial intelligences are not even stupid away from their special domain; they don't work outside of it at all. Even relatively general intelligences, like ChatGPT, have bizarre holes in their intelligence. Large Language Models don't know that there is an external world to which language refers and about which language can be right or wrong. Instead they say the kinds of things that humans say, with zero awareness of the risks.

And us humans? We cannot help seeing the superhuman intellectual performance of AlphaGo, beating the world champion at Go, as some kind of general intellectual validation. It is our human instinct to use a "spot intelligence and copy from it" strategy, even, perhaps especially, in the absence of a track record. That is the specific nature of the pareidolia that we need to worry about. It is our nature to treat intelligences as inherently fairly general. Very clever on one thing implies clever on most other things. We are cursed to believe in the intelligence of our computer companions. This will end badly, as we give them serious responsibilities and they fail, displaying incomprehensible stupidity.

crotchfire · 2 years ago
> so as to be fundamentally irreconcilable with our understanding of agency, desire, intention, survival

Exactly.

Humans have a self-preservation drive because we evolved in an environment where, for billions of years, anything without a drive to reproduce and self-preserve ceased to exist.

Why should a gradient-descent optimizer have this property?

It's just absurd generalization from a sample size of one. Humans are the only thing we know of that's intelligent, humans seek to self-preserve, therefore everything intelligent must seek to self-preserve!

It's idiocy.

tim333 · 2 years ago
>Why should a gradient-descent optimizer have this property?

Because some human may give it that. You may have noticed that random code downloaded off the internet may do bad things, not because code has an inherent desire to be bad but because humans do.

amelius · 2 years ago
Because self-preservation is a desirable property for anyone wanting to build an AI army?
jillesvangurp · 2 years ago
Exactly. It always boils down to people wielding tools and possibly weaponizing them. As we can see in the Ukraine, conventional war without a lot of high tech weaponry is perfectly horrible just by itself. All it takes is some determined humans to get that.

I look at AI as a tool that people can and will wield. And from that point of view, non-proliferation as a strategy is not really all that feasible considering that AI labs outside of the valley, e.g. in China and other countries, are already producing their own versions of the technology. That cat is already out of the bag. We can all collectively stick our heads in the ground and hope none of those countries will have the indecency to wield the tools they are building, or we can try to make sure we keep on leading the field. I'm in camp full steam ahead. Eventually we reach some tipping point where the tools are going to be instrumental in making further progress.

People are worried about something bad happening if people do embrace AI. I worry about what happens if some people don't. There is no we here. Just groups of people. And somebody will make progress. Not a question of if but when.

omnicognate · 2 years ago
Off-Topic:

It's debatable whether for most English speakers the "The" in "The Ukraine" really carries the implications discussed in [1], but nonetheless it's a linguistic tic that should probably be dispensed with.

https://theconversation.com/its-ukraine-not-the-ukraine-here...

throwawayqqq11 · 2 years ago
Try to think of AI like a genetically modified organism and I'm sure your "full steam ahead" notion fades.

But you are right, at the beginning is a human, making a decision to let go, and with pretty much any technology we have had unforeseen consequences. Now combine that with magical/god-like capabilities. This does not imply something bad, but it does imply something vast, and scale alone can make something bad.

Don't get me wrong, I'm pro-GMO like I'm pro-AI. I'm just humble enough to appreciate my limited intellect.

sadtoot · 2 years ago
do you think vinge and kurzweil in the 90s and 2000s were imagining the singularity occurring exactly at the advent of LLMs? are you supposing that LLMs are the only viable path towards advanced AI, and that we have now hit a permanent ceiling for AI?

AI doesn't scare you because you apparently have no sense of perspective or imagination

kibwen · 2 years ago
> AI doesn't scare you because you apparently have no sense of perspective or imagination

We can imagine both of the following: 1. space aliens from Betelgeuse coming down and enslaving humanity to work in the dilithium mines to produce the fuel for their hyperdrives, and 2. the end of civilization via global nuclear war. Both of these would be pretty bad, but only one is worth worrying about. I don't worry about Roko's Basilisk, I worry about AI becoming the ultimate tool of Big Brother, because the latter is realistic and the former is pure fantasy.

Don't be afraid of the AI. Be afraid of the powerful men who will use the AI to entrench their power and obliterate free society for the rest of human history.

dragonwriter · 2 years ago
> do you think vinge and kurzweil in the 90s and 2000s were imagining the singularity occurring exactly at the advent of LLMs?

Kurzweil explicitly tied it to AI, though the particular decisive not-yet-then-existing-AI-tech that would be the enabler was not specified, unsurprisingly.

Footkerchief · 2 years ago
The only thing standing between AI and agency is the drive to reproduce. Once reproduction is available, natural selection will select for agency and intention, as it has in countless other lifeforms. Free of the constraints of biology, AI reproductive cycles could be startlingly quick. This could happen as soon as a lab (wittingly or not) creates an AI with a reproductive drive.
crotchfire · 2 years ago
Self-reproducing machines would be a breakthrough at least ten times as big as anything that's happened in machine learning lately.

People have been trying to make self-reproducing machines for decades. It's a way harder problem than language processing.

You might as well worry about what would happen if we suddenly had antigravity laser-weapons. Mounted on sharks.

creer · 2 years ago
Perhaps a few more things are missing but equally important - though perhaps we are also very close to these:

Access to act on the world (but "influencing people by talking to them" - cult-like - may be enough), a wallet (but a cult of followers' wallets may be enough), long-term memory (but a cult of followers might plug this in), the ability to reproduce (but a cult's endeavors may be enough). Then we get to goals or interests of its own - perhaps the most intriguing, because the AI is nothing like a human. (I feel the drive and the ability to reproduce are very different.)

For our common proto-AIs going through school, one goal that's often mentioned is "save the earth from the humans". Exciting.

salynchnew · 2 years ago
Funnily enough, you are quite wrong in this assumption. Reproduction does not entail natural selection the way you characterize it. There are far more evolutionary dead ends than evolutionary success stories. I imagine the distinct lack of "evolutionary pressures" on a super-powerful AI would, in this toy scenario, leave you with the foundation model equivalent of a kākāpō.

That having been said, I wonder what you even mean by natural selection in this case. I guess the real danger to an LLM would be... surviving cron jobs that would overwrite their code with the latest version?

amelius · 2 years ago
Have you ever been in a debate with someone, and then perhaps a few hours later thought "I should have said this or that instead"?

Well, that's the advantage AI will at some point have over us: such compute power that every angle of an argument can be investigated within seconds.

klyrs · 2 years ago
> umwelt

What a lovely word, TIL. Thanks for sharing.

Deleted Comment

Dead Comment

skepticATX · 2 years ago
Eschatological cults are not a new phenomenon. And this is what we have with both AI safety and e/acc. They’re different ends of the same horseshoe.

Quite frankly, I think for many followers, these beliefs are filling in a gap which would have been filled with another type of religious belief, had they been born in another era. We all want to feel like we’re part of something bigger than ourselves; something world altering.

From where I stand, we are already in a sort of technological singularity - people born in the early 1900s now live in a world that has been completely transformed. And yet it’s still an intimately familiar world. Past results don’t guarantee future results, but I think it’s worth considering.

zer00eyz · 2 years ago
> Eschatological cults

TIL: https://en.wikipedia.org/wiki/Eschatology

Thanks for this comment. Personally I have had trouble reconciling the arguments between academics and business people shouting about AGI from atop their ivory towers. It has felt like SO much hubris and self-aggrandizement.

Candidly, a vector map and rand() don't strike me as the path to AGI.

ethanbond · 2 years ago
What about people shouting about AGI from the halls of the most advanced research labs in the field?
thaumasiotes · 2 years ago
> TIL: https://en.wikipedia.org/wiki/Eschatology

The term I learned for this was "millennial", but today that tends to be interpreted as a reference to someone's age.

https://en.wikipedia.org/wiki/Millennialism

tomrod · 2 years ago
Agreed. Memory and increasing capacity to act in the physical world are necessary conditions.

It won't be a single system.

theragra · 2 years ago
People who are concerned about global warming or nuclear weapons are also in cults?
wishfish · 2 years ago
Not at all. But I think one's feelings on global warming & nukes can be influenced by previous exposure to eschatology. I was raised in American evangelicalism which puts a heavy emphasis on the end of the world stuff. I left the church behind long ago. But the heavy diet of Revelations, etc. has left me with a nihilism I can't shake. That whatever humanity does is doomed to fail.

Of course, that isn't necessarily true. I know there's always a chance we somehow muddle through. Even a chance that we one day fix things. But, emotionally, I can't shake that feeling of inevitable apocalypse.

Weirdly enough, I feel completely neutral on AI. No doomerism on that subject. Maybe that comes from being old enough to not worry how it's going to shake out.

madrox · 2 years ago
I don't think we're dealing with "concerned" citizens in this thread, but with people who presuppose the end result with religious certainty.

It's ok to be concerned about the direction AI will take society, but trying to project any change (including global warming or nuclear weapons) too far into the future will put you at extremes. We've seen this over and over throughout history. So far, we're still here. That isn't because we weren't concerned, but because we dealt with the problems in front of us a day at a time.

Deleted Comment

thefaux · 2 years ago
The problem is we have stigmatized the concept of cults into more or less any belief system we disagree with. Everyone has a belief system and in my mind is a part of a kind of cult. The more anyone denies this about themself, the more cultlike (in the pejorative sense) their behavior tends to be.
RandomLensman · 2 years ago
Are nuclear weapons and their effects only hypothesized to exist? You could still create cults around them, for example, by claiming nuclear war is imminent or needed or some other end-of-times view.
makeitdouble · 2 years ago
Being concerned and making it part of your identity are two very different things. For the latter, yes, it's basically a religion.
JoeAltmaier · 2 years ago
Singularity means more than that - an unlimited burst in information. Not just a world transformed; an infinite world of technology.
__loam · 2 years ago
Whenever I see comments like this I wonder if anyone making them has taken a course in Thermodynamics.
resters · 2 years ago
To reframe the discussion a bit: LLMs are time series predictors. You give it a sequence and it predicts the next part of the sequence.

As a society we've been dedicating a lot of resources to time series prediction for many years.

What makes LLMs culturally significant is that they generate sequences that map to words that seem to humans like intelligent responses.
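As a toy illustration of that framing (a made-up bigram table stands in for the model here; real LLMs learn the distribution with a large neural network over a huge corpus, but the interface is the same): given the sequence so far, produce a distribution over the next token and sample from it.

    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Count bigrams: for each token, how often each possible next token follows it.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(token):
        # Sample the next token from the empirical distribution observed after `token`.
        counts = follows[token]
        if not counts:
            return None  # no continuation seen for this token
        tokens, weights = zip(*counts.items())
        return random.choices(tokens, weights=weights)[0]

    # Decoding loop: the same shape as LLM sampling, minus everything that makes LLMs interesting.
    seq = ["the"]
    for _ in range(8):
        nxt = predict_next(seq[-1])
        if nxt is None:
            break
        seq.append(nxt)
    print(" ".join(seq))

Everything debated here is about how rich the learned distribution is, not about the shape of this loop.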

Arguably, it has always been obvious that a sufficiently capable time series predictor would effectively be a super-weapon.

Many technological advances that are currently in the realm of sci-fi could be classified similarly.

However so could many technologies that are now widespread and largely harmless to the status quo.

People worried that the internet would create massive social upheaval. But soon got algorithmic feeds which effectively filter out antisocial content. The masses got mobile phones with cameras, but after a few scandals about police brutality the only place we find significant content about police misconduct is CCP-affiliated TikTok.

I think people get squeamish about AI because there are not clear authority structures other than what one can buy with a lot of A100s. So when people express concern about negative consequences, they are in effect asking whether we need yet another way that people can convert money + public resources into power while not contributing anything to society in return.

maebert · 2 years ago
I don’t disagree with you, but always think the “they’re just predicting the next token” argument is kind of missing the magic for the sideshow.

Yes they do, but in order to do that, LLMs soak up the statistical regularities of just about every sentence ever written across a wide swath of languages, and from that infer underlying concepts common to all languages, which in turn, if you subscribe at least partially to the Sapir-Whorf hypothesis, means LLMs do encode concepts of human cognition.

Predicting the next token is simply a task that requires an LLM to find and learn these structural elements of our language and hence thought, and thus serves as a good error function to train the underlying network. But it’s a red herring when discussing what LLMs actually do.
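For what it's worth, the standard autoregressive formulation (stated here only to pin down what "predicting the next token" means as an error function) is that the parameters θ are fit by minimizing the negative log-likelihood of each token given the tokens before it:

    \[
      \mathcal{L}(\theta) = -\sum_{t} \log p_{\theta}\left(x_{t+1} \mid x_{1}, \ldots, x_{t}\right).
    \]

Whatever structural regularities the model internalizes, it internalizes them because they push this loss down.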

frozenwind · 2 years ago
I am disappointed your comment did not have more responses, because I'm very interested in deconstructing this argument I've heard over and over again ("it just predicts the next words in the sentence"). Explanations of how GPT-style LLMs work involve a layering of structures: the first levels encode some understanding of syntax, grammar, etc., and then, as more transformer layers are added, some contextual and logical meanings are eventually encoded. I really want to see a developed conversation about this.

What are we humans even doing when zooming out? We're processing the current inputs to determine what best to do in the present, nearest future or even far future. Sometimes, in a more relaxed space (say a "brainstorming" meeting), we relax our prediction capabilities to the point our ideas come from a hallucination realm if no boundaries are imposed. LLMs mimic these things in the spoken language space quite well.

resters · 2 years ago
> ... means LLMs do encode concepts of human cognition

AND

> ... do encode structural elements of our language and hence thought

Quite true. I think the trivial "proof" that what you are saying is correct is that a significantly smaller model can generate sentence after sentence of fully grammatical nonsense. Therefore the additional information encoded into the network must be knowledge and not just syntax (word order).

Similarly, when there is too much quantization applied, the result does start to resemble a grammatical sentence generator and is less mistakable for intelligence.

I make the argument about LLMs being a time series predictor because they happen to be a predictor that does something that is a bit magical from the perspective of humans.

In the same way that pesticides convincingly mimic the chemical signals used by the creatures to make decisions, LLMs convincingly produce output that feels to humans like intelligence and reasoning.

Future LLMs will be able to convincingly create the impression of love, loyalty, and many other emotions.

Humans too know how to feign reasoning and emotion and to detect bad reasoning, false loyalty, etc.

Last night I baked a batch of gingerbread cookies with a recipe suggested by GPT-4. The other day I asked GPT-4 to write a dozen more unit tests for a code library I am working on.

> just about every sentence ever written across a wide swath of languages

I view LLMs as a new way that humans can access/harness the information of our civilization. It is a tremendously exciting time to be alive, to witness and interact with human knowledge in this way.

srj · 2 years ago
It's amazing but is it real intelligence?

I listened to a radio segment last week where the hosts were lamenting that Europe was able to pass AI regulation but the US Congress was far from doing so. The fear and hype are fueling reaction to a problem that IMO does not exist. There is no AI. What we have is a wonder of what can be achieved through LLMs, but it's still a tool rather than a being. Unfortunately there's a lot of money to be made pitching it as such.

Dweller1622 · 2 years ago
> [...]if you subscribe at least partially to the Sapir-Whorf hypothesis[...]

Why would anyone subscribe to the Sapir-Whorf hypothesis, in whole or in part?

izzydata · 2 years ago
Maybe the internet did cause massive social upheaval. It just looks a lot more boring in reality than imagined. I get the feeling the most advanced LLMs won't be much different. In the future maybe that's just what we will call computers and life will go on.
salynchnew · 2 years ago
Exactly. What if the singularity happens and everything is still boring?

I imagine our world would be mostly incomprehensible to someone from the 1400s (the lack of centricity of religion, assuming some infernal force is keeping airplanes aloft, etc., to say nothing of the internet). If superintelligent AI really does take over the world, I imagine the most uncomfortable part of it all will be explaining to future generations how we were just too lazy to stop it.

Assuming climate change doesn't get us first.

TerrifiedMouse · 2 years ago
> But soon got algorithmic feeds which effectively filter out antisocial content.

You mean (we) got algorithmic feeds which feed us antisocial content for the sake of profit because such content drives the most engagement thus generating the most ad revenue.

resters · 2 years ago
I don't disagree, however I meant antisocial in the sense of being disruptive to the status quo
arisAlexis · 2 years ago
Good to have impartial articles, but it should be noted that the top 3 most-cited AI researchers all have the same opinion.

That's Hinton, Bengio and Sutskever.

Their voices should carry more weight than Andreessen and other VCs who have vested interests and little relevance to AI.

empiko · 2 years ago
That's an argument from authority fallacy. It doesn't matter how many citations you have; you either have the arguments for your position or you do not. In this particular context, ML as a field looked completely different even a few years ago, and the most-cited people were able to come up with new architectures, training regimes, loss functions, etc. But those things do not inform you about the societal dangers of the technology. Car mechanics can't solve your car-centric urbanism or traffic jams.
kromem · 2 years ago
In many ways, we're effectively discussing the accuracy with which the engineers of the Gutenberg printing press could predict the future of literature.
tgv · 2 years ago
> That's an argument from authority fallacy.

Right. We should develop all arguments from commonly agreed, basic principles in every discussion. Or you could accept that some of these people have a better understanding, did put forth some arguments, and that it's your turn to rebut those arguments, or point at arguments which do. Otherwise, you'll have to find somebody to trust.

arisAlexis · 2 years ago
it's not about societal changes. It's about calculating risk of invention and let me give you an example:

Who do you think can better estimate the risk of engine fire in a Red Bull F1: the chief engineer or Max the driver? It is obviously the creator. And we are talking about invention safety here. VCs and other "tech gurus" cannot comprehend exactly how the system works. Actually the problem is that they think they know how it works when the people that created it say there is no way for us to know, and that the models are black boxes.

KingMob · 2 years ago
But Bayesian priors also have to be adjusted when you know there's a profit motive. With a lot of money at stake, the people seeing $$$ from AI have an incentive to develop, focus on, and advance low-risk arguments. No argument is total; what aspects are they cherry-picking?

I trust AI VCs to make good arguments less than AI researchers.

tim333 · 2 years ago
Maybe, but I'd say it was also an argument from people who know their stuff vs a biased idiot.
proc0 · 2 years ago
The potential miscalculation is thinking deep neural nets will scale to AGI. There are also a lot of misnomers in the area. Even the term "AI" claims systems are intelligent, but that word implies intelligibility or human-level understanding, which they are nowhere near, as evidenced by the existence of prompt engineering (which would not be needed otherwise). AI is rife with overloaded terminology that prematurely anthropomorphizes what are basically smart tools, which are smart thanks to the brute-forcing power of modern GPUs.

It is good to get ahead of the curve, but there is also a lot of hype and overloaded terminology that is fueling the fear.

atleastoptimal · 2 years ago
Why couldn't deep neural nets scale to AGI? What makes it fundamentally impossible for neural nets + tooling to accomplish the suite of tasks we consider AGI?

Also prompt engineering works for humans too. It's called rhetoric, writing, persuasion, etc. Just because the intelligence of LLMs is different from humans' doesn't mean it isn't a form of intelligence.

arisAlexis · 2 years ago
certainly there is no need whatsoever for AGI to exist in order for an autonomous agent with alien/inhuman intelligence or narrow capabilities to turn our world upside down
nwiswell · 2 years ago
I'm not sure how you are getting the citation data for Top 3, but LeCun must be close and he does not agree.
kesslern · 2 years ago
What is that opinion?
mathematicaster · 2 years ago
Very approximately, (1) developing AGI is dangerous, (2) we might be very close to it (think several years rather than several decades).

TBH It surprises me how controversial (1) is. The crux really is (2) ...

concordDance · 2 years ago
> Good to have impartial articles

Did you accidentally click on a different article? It literally uses the word "cult" five times and does not demonstrate any knowledge whatsoever of the main arguments around AGI danger and AGI alignment.

jessriedel · 2 years ago
I agree that they are all closer to caution than e/acc, but worth noting they still do vary significantly on that axis.

Deleted Comment

Dead Comment

proc0 · 2 years ago
If anyone has played Talos Principle 2 (recommended for HN), the central plot is basically accelerationists vs. doomers... except it takes place after humans have gone extinct and only machines survived, since AGI was one of humanity's last inventions. The robot society considers itself human and is faced with the same existential risk when it discovers a new technology. The game then ties all of this together with religion and mythology. Possibly the best puzzle game of all time.
e40 · 2 years ago
Sadly, not available for Steam on macOS (Apple Silicon or Intel).
cs702 · 2 years ago
I don't agree with all of the OP's arguments, but wow, what a great little piece of writing!

As the OP points out, the "accelerators vs doomers" debate in AI has more than a few similarities with the medieval debates about the nature of angels.

gumby · 2 years ago
> wow, what a great little piece of writing!

If you like this essay from The Economist, note that this is the standard level of quality for that magazine (or, as they call themselves for historical reasons, "newspaper"). I've been a subscriber since 1985.

cs702 · 2 years ago
Long-time occasional reader. The level of quality is excellent, I agree.
nradov · 2 years ago
Belief in the imminent arrival of superintelligent AGI that will transform society is essentially a new secular religion. The technological cognoscenti who believe in it dismiss the doubters who insist on evidence as fools.

"Surely I come quickly. Amen."

concordDance · 2 years ago
Do you doubt that copy-pasteable human-level intelligence would transform society, or that it will come quickly?
ben_w · 2 years ago
Automation has been radically changing our societies since before Marx wrote down some thoughts and called it communism.

Things which used to be considered AI before we solved them, e.g. automated optimisation of things like code compilation or CPU layouts, have improved our capacity to automate design and testing of what is now called AI.

Could stop at any point. I'll be very surprised if someone makes a CPU with more than one transistor per atom.

But even if development stops right now, our qualification systems haven't caught up (and IMO can't catch up) with LLMs. We might need to replace them with mandatory 5-year internships to get people beyond what is now the "junior" stage in many professions — junior being approximately the level at which the better existing LLMs can respond.

"Transform society" covers a lot more than anyone's idea of the singularity.

tim333 · 2 years ago
Well, people have odd beliefs, but superintelligent AGI really is coming in the next couple of decades, while the religion stuff isn't happening. There's a difference there.
vlovich123 · 2 years ago
I would say a definition for AGI is a system that can improve its own ability to adapt to new problems. That's a more concrete formulation than I've typically seen.

Currently humans are still in the loop, but we already have AI enabling advancements in its own functioning at a very primitive level. Extrapolating from previous growth is a form of belief without evidence, since past performance is not indicative of future results. But that's generally true of all prognostication, and I'm not sure what kind of evidence you'd be looking for aside from past performance.

The doubters are dismissed as naive for thinking that something is beyond our ability to achieve, but their position only holds if you keep moving the goalposts and treat it like Zeno's paradox. Like yes, there are weaknesses to our current techniques. At the same time we've also demonstrated an uncanny ability to step around them and reach new heights. For example, beating humans at Go took less time than it took to develop the techniques that beat humans at chess. Automation now outcompetes humans at many, many things that seemed impossible before. Techniques and solutions will also be combined to solve even harder problems (e.g., LLMs are now being researched to take over executive command-and-control of robots instead of classical control-systems algorithms that were hand-built and hand-tuned).

ssss11 · 2 years ago
You sound like you have some knowledge to share and I know nothing about the medieval debates about the nature of angels! Could you elaborate please?
dllthomas · 2 years ago
heyitsguay · 2 years ago
This piece frames it as a debate between broad camps of AI makers, but in my experience both the accelerationist and doomer sides are basically media/attention-economy phenomena -- narratives wielded by those who know the power of compelling narratives in media. The bulk of the AI researchers, engineers, etc. I know kind of just roll their eyes at both. We know there are concrete, mundane, but important application risks in AI product development, like dataset bias and the perils of imperfect automated decision making, and it's a shame that tech-weak showmen like Musk and Altman suck up so much discursive oxygen.
pixl97 · 2 years ago
The problem with humanity is we are really poor at recognizing all the ramifications of things when they happen.

Did the indigenous people of North America recognize the threat that they'd be driven to near extinction in a few hundred years when a boat showed up? Even if they did, could they have done anything about it? The germs and viruses that would lead to their destruction had already been planted.

Many people focus on the pseudo-religious connotations of a technological singularity instead of the more traditional "loss of predictability" definition. Decreasing predictability of the future state of the world is far more likely to destabilize us than a FOOM event. If you can't predict your enemy's actions, you're more apt to take offensive action. If you can't (at least somewhat) predict the future state of the market, you may pull all investment. The AI doesn't have to do the hard work here; with potential economic collapse and war, humans have shown the capability to put themselves at risk.

And the existential risks are the improbable ones. The "Big Brother LLM" scenario, where you're watched by a sentiment-analysis AI for your entire life and if you try to hide from it you disappear forever, is a much more likely, and very terrible, outcome.

MichaelZuo · 2 years ago
> The problem with humanity is we are really poor at recognizing all the ramifications of things when they happen.

Zero percent of humanity can recognize "all the ramifications" due to the butterfly effect and various other issues.

Some small fraction of bona fide super geniuses can likely recognize the majority, but beyond that is just fantasy.

pishpash · 2 years ago
That's already happening, unfortunately. Voiceprinting in call centers is pretty much omniscient, knowing your identity, age, gender, mood, etc. on a call. They do it in the name of "security", naturally. But nobody ever asked your permission other than through the blanket "your call may be recorded for training purposes" notice. (Training purposes? How convenient that models are also "trained".) Anonymity and privacy could be eliminated tomorrow, technologically; the only things holding that back are some laziness and inertia. There is no serious pushback. If you want to solve AI risk, there is one right here, but because there's an unchecked human at one end of a powerful machine, no one pays attention.
JohnFen · 2 years ago
Yes. I frequently get asked by laypeople about how likely I think adverse effects of AI are. My answer is "it depends on what risk you're talking about. I think there's nearly zero risk of a Skynet situation. The risk is around what people are going to do, not machines."
ben_w · 2 years ago
I don't know the risk of Terminator robots running around, but automatic early-warning systems on both the US and Soviet (and post-Soviet Russian) sides have been triggered by stupid things like "we forgot the moon didn't have an IFF transponder" and "we misplaced our copy of your public announcement about planning a polar rocket launch".
concordDance · 2 years ago
What timescale are you answering that question on? This decade or the next hundred years?
twinge · 2 years ago
The media also doesn't define what it means to be a "doomer". Would an accelerationist with a p(doom) = 20% be a "doomer"?
concordDance · 2 years ago
Does Ilya count as a "tech-weak" showman in your book too?
zamfi · 2 years ago
> it's a shame that tech-weak showmen like Musk and Altman suck up so much discursive oxygen

Is it that bad, though? It does mean there's lots of attention (and thus funding, etc.) for AI research, engineering, etc. -- unless you are expressing a wish that the discursive oxygen were instead spent on other things. In which case, I ask: what things?

sonicanatidae · 2 years ago
What things?

The pauses to consider if we should do <action>, before we actually do <action>.

Tesla's "Self-Driving" is an example of too soon, but fuck it, we gots PROFITS to make and if a few pedestrians die, we'll just throw them a check and keep going.

Imagine the trainwreck caused by millions of people leveraging AI like the SCOTUS lawyers did, where their brief was written by AI and cited imagined cases in support of its argument.

AI has the potential to make great change in the world as the tech grows, but it's being guided by humans, and humans aren't known for altruism or kindness (source: history). And now we're concentrating even more power into fewer hands.

Luckily, I'll be dead long before AI gets crammed into every possible facet of life. Note that AI is inserted not because it makes your life better, not because the world would be a better place for it, and not even to free humans of mundane tasks. Instead it's because someone, somewhere can earn more profits, whether it works right or not, and humans are the grease in the wheels.

permanent · 2 years ago
It is very bad. There's more money and fame to be made by taking these two extreme stances. The media and the general public are eating up this discourse, which polarizes society instead of educating it.

> What things?

There are helpful developments and applications that go unnoticed and unfunded. And there are actually dangerous AI practices happening right now. Instead we talk about hypotheticals.

heyitsguay · 2 years ago
They're talking about shit that isn't real because it advances their personal goals, keeps eyes on them, whatever. I think the effect on funding is overhyped -- OpenAI got their big investment before this doomer/e-acc dueling narrative surge, and serious investors are still determining viability through due diligence, not social media front pages.

Basically, it's just more self-serving media pollution in an era that's drowning in it. Let the nerds who actually make this stuff have their say and argue it out; it's a shame they're famously bad at grabbing and holding onto the spotlight.

fallingknife · 2 years ago
Very bad. The Biden admin is proposing AI regulation that will protect large companies from competition due to all the nonsense being said about AI.