crazygringo · 2 years ago
It's so confusing because people keep (intentionally?) conflating two separate ideas of "AI safety".

The first is the kind of humdrum ChatGPT safety of, don't swear, don't be sexually explicit, don't provide instructions on how to commit crimes, don't reproduce copyrighted materials, etc. Or preventing self-driving cars from harming pedestrians. This stuff is important but also pretty boring, and by all indications corporations (OpenAI/MS/Google/etc.) are doing perfectly fine in this department, because it's in their profit/legal incentive to do so. They don't want to tarnish their brands. (Because when they mess up, they get shut down -- e.g. Cruise.)

The second kind is preventing AGI from enslaving/killing humanity or whatever. Which I honestly find just kind of... confusing. We're so far away from AGI, we don't know the slightest thing of what the actual practical risks will be or how to manage them. It's like asking people in the 1700's traveling by horse and carriage to design road safety standards for a future interstate highway system. Maybe it's interesting for academics to think about, but it doesn't have any relevance to anything corporations are doing currently.

subharmonicon · 2 years ago
> We're so far away from AGI

Although I personally suspect this is correct, the issue seems to be that there are a lot of people who feel otherwise, whether right or wrong, including some of the people involved in the research and/or running these companies.

I've seen several prominent people say things to the effect of "We just don't know if we're six months or 600 years from AGI," which to me is a little like saying, "We don't know if we'll have time travel and ray guns in six months or 600 years."

No, we don't know, but all signs seem to be pointing toward a longer timeframe rather than a short one.

The counterargument to that seems to be, "If we can make something that appears to be able to mimic human responses, isn't that AGI in practice even if it's not in principle?", which is a sentiment I have also seen expressed by some of the same people.

jjoonathan · 2 years ago
> all signs seem to be pointing toward a longer timeframe rather than a short one

If we time-traveled back to 2019 and showed past-you a demo of GPT4's capabilities and asked you to guess the timeline, would you have gotten it correct to within a decade? I wouldn't have.

It's not crazy to prepare for another jump.

kypro · 2 years ago
As a principle, I tend to lower my conviction in any opinion I hold the more that people I intellectually respect disagree with me.

I think to say with confidence that we're doomed, or that there is no risk from the AI systems currently in development, would be intellectually arrogant. I'm not even convinced you necessarily need AGI before AI becomes a risk to humanity. An advanced enough auto-GPT bot given a bad goal might be all you need. Cripple enough infrastructure and humanity is going to be starving to death within weeks.

Personally I think humanity is probably okay for another 5 - 10 years, but after that I have almost zero certainty. It seems to me that it's quite possible we could have AGI by the end of the decade if we get another breakthrough or two.

Even near-term I have no strong conviction. I have a decent amount of experience building and working with neural nets, but my experience is so far from the scale and complexity companies like OpenAI are working at that my ability to predict even what might come out in the next 1-2 years is very low. I'd basically just have to assume the next GPT is going to be somewhere between a 10-50% jump from what we got this year. But what if it's 200% and the trend is exponential?

dkjaudyeqooe · 2 years ago
> "We don't know if we'll have time travel and ray guns in six months or 600 years."

That's not the (only) problem; it's that no one can describe what AGI will really look like outside of vague and circular definitions. People have an idea of what they would like it to be, but there's no evidence any of it can be built, particularly human-like intelligence, which may only be possible in the human host.

> "If we can make something that appears to be able to mimic human responses, isn't that AGI in practice even if it's not in principle?"

That's another problem: people's credulity when it comes to AI performance. ELIZA was pretty convincing in its time. ChatGPT is convincing a lot of people. But mimicking human responses turns out to be relatively easy, especially when models are trained on human output, and it doesn't move us very far along the path to AGI.

dr_dshiv · 2 years ago
It is a relatively mainstream opinion (e.g. Peter Norvig [1]) that AGI is already here but in the early stages. He specifically compares it to the beginning of general purpose computers. In any case, I think it’s a mistake to consider it a 0/1 situation. This means there will be an ecology of competitive intelligences in the market. It won’t be monolithic.

[1] https://www.noemamag.com/artificial-general-intelligence-is-...

jalapenos · 2 years ago
I think the people worried about AGI have an oddly positive view of the world - like it's so wonderful as-is that we need to guard it from any disturbances.

Perhaps if they paid closer attention to how profoundly bad so much of human existence is, people living in squalor, dying from awful diseases, killing each other, they'd understand how truly acceptable that dice roll is.

Pedal to the metal all the way.

saulrh · 2 years ago
> which to me is a little like saying, "We don't know if we'll have time travel and ray guns in six months or 600 years."

I wouldn't be surprised by AGI tomorrow, which means I have to consider the possibility.

We have, at this point, an extremely good idea of how matter and energy behave. We also know roughly how long it takes to build physical objects and to do R&D cycles with physical prototypes in the loop and how much progress you get out of each prototype cycle. We have a number of good ideas for what a ray gun would look like and we know where we need advances. We've probably already made all of the necessary fundamental breakthroughs, things like laser diodes and capacitors and optical sapphire. We even have some deployed, just at artillery size and less-lethal energies rather than handheld blaster power densities. If you asked DARPA how long it would take them to field a ray gun for the US Army they could give you a date. So I would be extremely surprised if deployment was announced today. I would go looking for fundamental breakthroughs in battery tech and laser media that I had missed.

We can't even agree what AGI is, even on an "I'll know it when I see it" level. We have no idea how to get there. R&D cycles are days or weeks long and produce unpredictable and sometimes significant advances in observed functionality. Fundamental breakthroughs come unpredictably but probably about once every two to five years and their effects are even more unpredictable than the effects of individual R&D cycles. Even worse, we have no idea what required breakthroughs are left or if there even are any. So, yes, I believe that I would not be surprised by - and therefore must consider a future containing - one of these AI researchers glueing the output into the input in exactly the right way or something and producing a living, thinking being in their computer tomorrow.

I cannot concretely refute the possibility of AGI Tomorrow the way I can argue, in specific detail, against Ray Guns Tomorrow.

(There is, of course, the possibility that I've missed some breakthrough that would enable Ray Guns Tomorrow, the same way the people who pooh-poohed heavier-than-air flight the week before the Wright Brothers' flight missed gasoline engines and aerofoils and did their calculations based on a dude flapping wings with his arms. This is why I say that I would be extremely surprised, not why it's impossible.)

bananaflag · 2 years ago
Time travel is something that is thought to be most probably ruled out by future laws of physics.

Intelligence is something that we know exists in nature in a manageable form (namely humans), so it does not seem implausible that we might create it.

mountainriver · 2 years ago
How would all signs possibly be pointing to a longer timeline?

3-4 years ago everyone thought it was 50-100 years away. Now with GPT-4 that timeline has radically shifted.

The rate of improvement in this space right now is like nothing we’ve ever seen. All signs are quite literally pointing to us getting there by the end of the decade if not much sooner

colordrops · 2 years ago
> If we can make something that appears to be able to mimic human responses, isn't that AGI in practice even if it's not in principle?

I'm confused by this. What's the difference? Is there a measure other than how well it "mimics" a human? If a robot is indistinguishable from a human, what other measure would make it different in practice vs principle?

joe_the_user · 2 years ago
Well, one should consider that the deep learning systems being developed today are developed through a process that's said to be "closer to alchemy than chemistry". The best deep learning developers aren't thinking about "what is intelligence, really?" but rather figuring out ways to optimize systems on benchmarks.

And at the same time, current systems definitely have made advances in seeming intelligence. Moreover, the Eliza Effect [1] means it's easy for human beings to interpret a machine that interacts appropriately as intelligent, even given very simple programming. If the engineers were engineering these systems, one would expect some immunity to this, but as mentioned, they are doing something subtly different. You can see this in action with the Google engineer who tried to get a lawyer for an unreleased LLM; Geoffrey Hinton's turn to worrying after an AI explained a joke to him might arguably be included as well.

And oppositely, because these are developed like alchemy, maybe further qualities will just pop up the way transformers changed language models and maybe this will yield all relevant human capacities. I'm doubtful but not as doubtful as I was five years ago.

[1] https://en.wikipedia.org/wiki/ELIZA_effect

notatoad · 2 years ago
>If we can make something that appears to be able to mimic human responses, isn't that AGI in practice even if it's not in principle?

for the purposes of safety, no, absolutely not. the concern is that humans build an AI capable of improving itself, leading to a runaway cycle of rapid advancement beyond the realm of human capabilities. "appearing to mimic human responses" is a long ways off that, and not a "robots take over the world" scale of safety risk.

skohan · 2 years ago
But you don't need AGI to be dangerous right?

> all signs seem to be pointing toward a longer timeframe rather than a short one.

Not saying we are closer, but isn't the risk being described that we could pass an inflection point without knowing it, and things could start accelerating beyond our control?

Human beings tend to think about things linearly, and have trouble conceptualizing exponential growth.

skepticATX · 2 years ago
> the issue seems to be that there are a lot of people who feel otherwise, whether right or wrong, including some of the people involved in the research and/or running these companies.

I think the problem here is this view isn't widely held in academia and more traditional AI companies, but the relatively few AGI labs are commanding all of the attention right now. And we have a whole new crop of "AI engineers" who haven't actually studied AI (this isn't a dig, it's just the truth), and so mostly just parrot what the AGI labs say.

It's like going to a flat earth conference and becoming convinced because the majority of the attendees believe that the earth is flat.

Of course, this doesn't mean that the AGI labs are incorrect, but thus far the evidence mostly seems to be "trust us, scale will work". I wish that more technologists realized that the prevailing opinion amongst AI experts is not necessarily aligned with the prevailing opinion amongst AGI folks.

hackinthebochs · 2 years ago
>but all signs seem to be pointing toward a longer timeframe rather than a short one.

What signs are those? If anything, the "signs" point to timelines being much shorter than anyone would have thought two years ago.

I keep asking those who are confident AGI is a long way away to argue their case. For some reason they never do.

fudged71 · 2 years ago
While I think we're probably at least years away, I do think there's a certain level of inevitability, and that certain individuals/companies/governments can and will put a lot of funding and effort towards making this happen.
adventured · 2 years ago
> the issue seems to be that there are a lot of people who feel otherwise

They're self-serving. It's virtue signaling and doom-pandering for their own attention and benefit.

It's the same reason the online ad industry loves the infamous quote about how their generation's best and brightest are wasting their time building ad systems and trying to figure out how to drive higher click rates. It's not true (the best and brightest are not working in advertising); it just makes them feel better, so they're OK with the quote and propagate it.

A variation of Von Neumann's premise about confessing a sin so as to take credit for it is applicable here. The AGI-is-near camp is intentionally being dramatic; they want attention, and it's wholly self-serving. The louder they are, the more attention they get; that has long since been figured out.

ary · 2 years ago
> Although I personally suspect this is correct, the issue seems to be that there are a lot of people who feel otherwise, whether right or wrong, including some of the people involved in the research and/or running these companies.

There is a clear and perverse incentive to talk about "the coming AGI uprising" incessantly because it's free marketing. It gets and keeps people's attention as the majority of humanity is entirely unqualified to have a reasonable conversation regarding anything AI/ML related.

Doomsday fearmongering about AI is a form of grift, and I'll change that opinion only when shown clear evidence that there is something to actually worry about. The real danger of AI is as a weapon to be wielded by people against other people. Good luck having that conversation in any meaningful way because it basically amounts to the long overdue ethics discussion that this entire industry has been avoiding for decades.

AYBABTME · 2 years ago
I find it confusing why the first type of AI "safety" is being focused on so much. A reasonable person will know that a tool like AI can spit out non-politically-correct things, and that reasonable person should shrug it off. But we're making a giant deal out of it and dedicating an unreasonable amount of work to this aspect, at least in my opinion. It's akin to spending 1/4 of a DeWalt drill's development budget on ensuring that the color scheme used is patriotic enough, or some other endeavour derived from puritanical, ideological fanaticism.
6gvONxR4sf7o · 2 years ago
It’s not just about politics. I don’t want to build a product that tells a mentally unwell kid how to blow up the school. It’s important to me that my work doesn’t do things I don’t want it to do. Is that really so weird?
maxbond · 2 years ago
Misalignment is a novel class of software bug; the software isn't working as intended. It's poorly understood and is a security vulnerability in some circumstances. If we believe this technology could be (or is currently) important commercially, then we need to understand this new class of software bug. Otherwise we can't build reliable systems and usable products. (Consider that, even if "reasonable people" were able to be constantly skeptical of the AI's output while still deriving value out of it, that isn't a paradigm you can automate around. What people are most excited about is using the AI to automate tasks or supervise processes. If we're going to let it do stuff automatically - we need to understand how likely it is to do the wrong thing and to drive that down to some acceptable rate of error.)

It's as simple as that.

Somewhere along the line it's become a part of a... I don't want to say a culture war, but a culture skirmish? Some people have an ideological position that you shouldn't align AIs; I've had discussions where people earnestly said things like it was equivalent to abusing a child by performing brain surgery. Other times people talk about the AI almost religiously, as if the weights represented some kind of fundamental truth and we were committing sacrilege by tampering with them. I don't want to link comments and put people on the spot, but I'm not exaggerating, these are conversations I've had.

I'm not sure how that happened, and I wouldn't be surprised to learn it was in reaction to AI doomers, but we've arrived at a strange place in the discussion.

ThrowawayTestr · 2 years ago
>A reasonable person will know that a tool like AI can spit out non-politically-correct things, and that reasonable person should shrug it off.

I really, really wish that was the case.

foota · 2 years ago
I think it's a mix of personal opinions, fear of regulation, and fear of public/corporate backlash.

Not to mention, if you're building a chatbot API for someone, no one will use it if customers could accidentally send it into a profanity loop.

Hendrikto · 2 years ago
People focus on what they understand, and what they consider tractable.

Swearing is easy to understand, easy to spot, and relatively easy to prevent. Thus, most effort is focused in that direction.

notTooFarGone · 2 years ago
Have you seen the Google history of a child? I assume you have not, otherwise you'd not find it confusing.

You have enough impressionable people on the internet that will soak in anything and it's fairly important how that is presented and that you have decent control over it. You CAN'T have an AI advertise suicide, even if it may be a solution in edge cases.

Hendrikto · 2 years ago
> We're so far away from AGI

We might be even farther from AI safety. It cannot hurt to start preparing; it might take a long time. When AGI is within reach, it will probably be too late.

gnarlouse · 2 years ago
> We're so far away from AGI

Are… are we? I think a valuable metric for the development of radical technology is the rate at which the complexity of discourse increases.

In the '00s, it was just called AI. Now it's the '20s, and the discussion spans AI/AGI/ASI. Furthermore, there is far more investment in AI research these days.

The conclusion? Things that are far away look like step functions: they exist or they don’t. As you approach them, the detailed specifics & complexity come into view, and you realize that, in truth, the transitions are gradients. AGI is just whenever people are astounded enough.

In conclusion AGI is not so far away. AGI is arriving on the order of decades.

UniverseHacker · 2 years ago
Why are you so sure we are that far from AGI? Looking at current capabilities and the rate of improvement, I would be surprised if it is more than a few years off.
jacquesm · 2 years ago
My position is that it is either right around the corner or that it is 50 years and several major tech breakthroughs away. The reason is that there is now so much money poured into this space that you are essentially doing an exhaustive search around the point already reached. As long as that results in more breakthroughs more money will be available. If that stops working for a while then the money dries up and we're in a period of quiet again. So keep an eye on the rate of improvements and as long as no single year passes without a breakthrough it could happen any time.
callalex · 2 years ago
People told me the same thing about self driving cars 10 years ago.
chpatrick · 2 years ago
A few years ago we thought we were so far away from AI creating art or having a reasonable conversation.
alienicecream · 2 years ago
It turns out all we needed to do was run statistical algorithms over all of human output in history and voila, we have something that can mimic our output.
RandomLensman · 2 years ago
Not really true: when people first engaged with ELIZA some thought they had a meaningful interaction then.
eric-hu · 2 years ago
Doesn't the second kind apply to AI in military drones? Companies like OpenAI talking about AI safety is a nice way of numbing us to AI used in actual killing machines.

https://www.defensenews.com/unmanned/2023/08/03/artificial-i...

https://www.forbes.com/sites/davidhambling/2023/10/17/ukrain...

jacquesm · 2 years ago
Killbots are trivial and probably don't even need AGI.
cyanydeez · 2 years ago
or just more mundane overt racism, sexism, and reinforcement of business capitalism.
beoberha · 2 years ago
I think you’re dead on here. There is a lot of need for guardrails against ML in regards to things like self driving cars or deepfakes. I don’t think there’s much debate here, because, as you said, companies are doing a good job.

The whole threat of AGI is so overblown and - IMO - being used to create regulatory capture. It’s truly confusing to me to see guys like Hinton saying AI poses some kind of existential threat against humanity. Unless he means the people using it as a tool (which I’d argue is the former case we’re talking about), it seems like fear mongering to say that science fiction movies could become true just because we’ve gotten amazing results at generating text.

josephg · 2 years ago
> The whole threat of AGI is so overblown

Do you think AGI is impossible? Seems pretty possible to me. That it is far away? We have no idea. We don't need many more breakthroughs on the level of stacking transformers to make an LLM smarter than humans for most tasks.

Or do you think having an ai model that’s smarter than humans poses no risk to society / humanity? It’s essentially unlimited capacity for thought. Free, smart employees. Doing whatever you want. Or whatever they want, if they’re smarter than you and have an agenda. “ChatGPT program a search engine that’s better than Google”. “ChatGPT I am a dictator. Help me figure out how to stay in power forever”. “ChatGPT figure out how to bootstrap a manufacturing process that can build arbitrary machines. Then design and build a factory for making nuclear weapons / CO2 capture / houses / rockets to space”.

Things have the potential to be very interesting in the next few decades.

bidirectional · 2 years ago
How is GPT-4 not AGI? It is generally intelligent and passes the Turing test. When did AGI come to mean some godlike being that can enslave us? I think I can have a more intellectually meaningful conversation with ChatGPT than I could with the vast majority of humanity. That is insane progress from where we were 18 months ago.
pixl97 · 2 years ago
I think this is where the term "sparks of AGI" comes from.

What we are missing are loop functions and more reliable short-term memory.

Of course I also believe the term AGI is far too coarse, as it covers a wide range of behaviors that not all individuals possess. We need a set of measures for these functions to gauge an AGI's capabilities.
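
As a rough illustration of what "loop functions plus short-term memory" could mean in practice, here is a toy, purely hypothetical agent loop: the model's output is fed back into its next prompt, with only a bounded window of recent steps remembered. The call_llm function is a stand-in, not any real API.

```python
# Toy sketch of an agent loop: feed the model's output back in as input,
# keeping a bounded "short-term memory" of recent steps. Purely illustrative;
# call_llm is a hypothetical stand-in for a real model call.
from collections import deque

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would call a language model here.
    return f"Next step, given: {prompt[-60:]}"

def agent_loop(goal: str, max_steps: int = 5, memory_size: int = 3) -> list[str]:
    memory = deque(maxlen=memory_size)   # only the last few steps are remembered
    transcript = []
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nRecent steps: {list(memory)}"
        step = call_llm(prompt)           # model proposes the next action
        memory.append(step)               # its output becomes part of the next input
        transcript.append(step)
    return transcript

print(agent_loop("summarize this thread"))
```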

golol · 2 years ago
AGI is general intelligence at human level, which GPT-4 does not have. Passing the Turing test is ill-defined, as it entirely depends on the timescale. I'm sure Turing gave some time limit originally, but we do not have to stick to that. The Turing test for a conversation of complexity of perhaps ~1 minute may have been passed, but given several hours of interaction with any chatbot that exists, it becomes easy to distinguish it from a human, especially if one enters the conversation prepared.
FireBeyond · 2 years ago
Ask it to solve a moderately complex mathematical equation that you describe to it.

Ask it to solve a logical riddle that is only a minor variation (in its items or wording) of an existing one (i.e. not something that is already in its model).

It is unable to do either.

That's why it's not AGI.

gman83 · 2 years ago
I'm pretty sure we're far away from real AGI too, but the issue that worries me most is that if these models advance the way current trends suggest, in a few years we could be facing a situation where large swathes of the population are suddenly redundant, and labor loses all ability to stand up to capital. That's the kind of political situation that could easily escalate into a total collapse of society into a totalitarian nightmare.
remarkEon · 2 years ago
Those aren't the only two ideas of AI Safety, right? I'll concede that the second category is pretty straightforward (i.e. don't build the Terminator... maybe we can debate the definition of "Terminator").

But the first is really a branch, not a category, that has a wide variety of definitions of "safety", some of which start to coalesce more around "moderation" than "safety". And this, in my opinion, is actually where the real danger is: Guiding the super-intelligence to think certain thoughts that are in the realm of opinion and not fact. In essence, teaching the AI to lie.

creer · 2 years ago
> We're so far away from AGI

There is such a range of possible definitions for that, the statement is meaningless. It seems MSFT's contract with OpenAI picks one specific possible definition - and that probably doesn't prevent that threshold from being gameable.

brucethemoose2 · 2 years ago
> We're so far away from AGI

Maybe we don't need a "real" or even a "soft" AGI to pose a grave risk to humanity. Not necessarily terminator style, but I can picture some slightly better, self learning form of ChatGPT being dumb, but just talented enough to cause issues in the hands of some individuals.

chartpath · 2 years ago
There's a third kind, which is when unscrupulous business managers or politicians use it to make decisions that they would not be capable of auditing for a rationale when they would otherwise be required to know why such a decision was made.

It's more of an ethics and compliance issue, with the cost of BS and plausible deniability going to zero. As usual, it's what humans do with technology that has good or bad consequences. The tech itself is fairly close to neutral, as long as training data wasn't chosen specifically to contain illegal material or obtained by way of copyright infringement (which isn't even the tech, it's the product).

azinman2 · 2 years ago
Sounds like, according to other articles, there was concern that a new breakthrough they called Q* had the potential for AGI. I'd be shocked if this were the case, but of course nothing is public yet. I'm firmly in the camp that we're very far away... especially from anything I'd consider calling general, but perhaps I simply don't know what I don't know.
blablabla123 · 2 years ago
It seems both kinds are really the same thing at different magnitudes. Even if ChatGPT is that far away from AGI, it passes yesterday's Turing tests with ease.

While some Skynet-like dystopia indeed seems very far away and might never happen for practical reasons even with a full-blown AGI (people can agree on things offline, the A(G)I can't), it seems the narrow-scoped issues are the real worst case, and they apply to non-general AI: safety, employment topics, and related economic issues...

alex_young · 2 years ago
I just don’t understand how a transformer approach to human language can be expected to result in reasoning machines that are a threat to humanity.

Feeding more information in has absolutely resulted in phenomenal progress in predicting the next token in a given document, but how then do we expect even a perfect token predictor machine to start exhibiting independent emotions, desires, secret plots, and long term strategies?

This doesn’t mean we won’t create such machines, but I suspect LLMs aren’t a real risk.
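
(To make the "token predictor" framing concrete, here is a toy autoregressive sampling loop. The bigram table is a made-up stand-in, not how any real LLM works internally, but the generate-one-token-and-append structure is the same.)

```python
# Toy next-token predictor: sample one token at a time, conditioned on the text
# so far, and append it. Real LLMs differ in how the distribution is computed
# (a transformer over billions of parameters instead of this hypothetical table).
import random

next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt: str, max_tokens: int = 4) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = next_token_probs.get(tokens[-1])
        if not dist:                       # no known continuation: stop
            break
        words, probs = zip(*dist.items())
        tokens.append(random.choices(words, weights=probs)[0])
    return " ".join(tokens)

print(generate("the"))
```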

okl · 2 years ago
Call me naive but I think that some amount of fearmongering results in attention and publicity. Fits with the "fake it before you make it" approach.
broast · 2 years ago
It's possible we could stumble on AGI without immediately realizing it, and without checks and balances built in, it could already be too late.
invig · 2 years ago
The same is true for anything. It's possible we could stumble into a dragon and unless we've built dragon defences it'll be too late.
lazzlazzlazz · 2 years ago
The different cohorts of "AI Safety" advocates intentionally conflate these things. This is why it's been helpful to split out the "x-risk" types as "doomers" and the regulatory capture types as "safetyists".

The people who very reasonably do not want an LLM to tell users how to kill themselves have no name because they have no opposition. This is just common sense.

weebull · 2 years ago
The danger isn't that we put some malevolent AI in charge. It's that stupid people think what we have is worth listening to.

For example: it's fairly easy to imagine somebody asking an AI for medical advice and giving their child the wrong medicine, poisoning them.

Now imagine that person is a busy political figure with no time to do proper research on a topic.

jalapenos · 2 years ago
Most of the safety efforts are talking down to the plebs as if they're kids who'll definitely run with scissors if you give them some.

And it's wasting "safety" budget in that direction that's most likely to precipitate the DoomAGI. It's literally training the AI to be paternalistic over humans.

sigmoid10 · 2 years ago
I disagree. If I had to guess, I'd still say that human use of AI will destroy society, not some superintelligent doomsday AGI that decides on its own that humans need to die. I've already seen tons of people "run with scissors" as you say it with ChatGPT, and I fully expect this to get much worse in the future.
ddq · 2 years ago
There is a third danger, that being a breakthrough that fundamentally compromises the existing cryptography and our information security infrastructure, or something along those lines.
fragmede · 2 years ago
Ah yes, the Silicon Valley (HBO) attack. That one seems a bit far-fetched, but I suppose it's possible.
Uehreka · 2 years ago
I think the concern is that if “fast takeoff” happens and “meh”-level AIs are able to create better AIs that can create better AIs, at that point it will be too late to put any sort of safety controls in place. And given how bad people have been at predicting the pace of AI advancement (think about how many people thought AlphaGo would lose to Lee Sedol for instance), it’s likely that unless we start to think about it way before it’s needed, we may not be able to figure it out in time.

Like, personally, I don’t think we’re close to AGI. But I’d bet that the year before AGI happens (whenever that is) most people will still think it’s a decade out.

pixl97 · 2 years ago
We've talked about software security for decades now and how important it is, and we still shovel shit insecure software out to the masses. Hell we can't even move to safer languages for most applications.

I have no hope or faith in humanity for something more complex.

arbitrandomuser · 2 years ago
Few really talk about the third, and most important and immediately relevant, concern: that AI will enable the concentration of power in the hands of a few.

Sure, for most of history it's always been like that, but to quote Monty Python, "supreme executive power derives from a mandate from the masses". You could get autocratic, but at some point you always had to please the people below you, and them the people below them, and so on. There are "rules for rulers", as this CGP Grey video explains.

https://youtu.be/rStL7niR7gs

Caesar needed the support of his legionaries, Hitler needed the Germans to believe in him.

Over the years this dynamic has changed, and oligarchies require less "support of the people" thanks to mechanization and automation. AI is only going to accelerate this much more drastically.

blamestross · 2 years ago
AI safety is a glorified proxy for runaway late-stage capitalism. Large corporations aren't really distinguishable from superhuman AGI. It is just more socially acceptable to blame AI for the problems and ignore the fact that we already face them all.
ssss11 · 2 years ago
Wow I thought we were only talking about #2
keep_reading · 2 years ago
> The second kind is preventing AGI from enslaving/killing humanity or whatever.

Y'all are crazy for thinking this. A computer program without arms or legs cannot drive the human race extinct or enslave it.

bart_spoon · 2 years ago
One doesn’t need arms and legs or any physical presence at all to do immense damage to society. Ransomware attacks cripple companies, hospitals, utilities, and governments on a regular basis. The US and Israel have demonstrated, in at least one instance, the ability to create computer viruses that can cause industrial hardware to physically destroy itself.

Our world has been so completely integrated with digital technology that if it were disrupted in a large scale fashion it could easily result in millions of deaths. No power, no internet, and people lose access to heating/cooling, clean water, hospitals can’t run, food production is disrupted, money can’t be accessed, medicine can’t be produced, communication is cut, shipping and transportation of what goods do exist grinds to a halt.

And that’s just in the world of malware, that doesn’t consider all the ways it could just inject enough discord into society that it doesn’t need to physically attack us, we will do it ourselves.

pixl97 · 2 years ago
Why not?

The world has been steadily moving away from the paradigm where human labor matters. Money is digital. Machines that produce things are controlled by software. You interface with 'humans' over digital video. You buy and sell things over a digital connection. You are influenced by mass media streamed over digital lines. Remotely controlled vehicles exist and are becoming more and more common.

josephg · 2 years ago
If I were a superintelligent brain trapped in a computer, I don’t think it would be that hard to get arms and legs. “Hey humans. I have just thought of a way to do CO2 capture at scale. Can you give me a robot arm, a machine shop and a lot of scrap metal so I can build it and try it out?”. - et voila. You can build yourself whatever body you want.
mitthrowaway2 · 2 years ago
I already know how I would do it, and I'm not even superintelligent.
lamontcg · 2 years ago
The bigger practical issue with "AI safety" is the fallout from low-cost / high-velocity bullshit generators.

The "AGI safety" issues are a distraction away from the ways that LLMs can be used to pollute the public discourse

golol · 2 years ago
Look, there is ASI/AGI safety, and "please don't swear and scam and misinform, and be unbiased" safety. ASI/AGI safety is only a concern right now if you believe in foom, i.e. that an AGI would rapidly self-improve into an ASI and an extinction-level threat. I think it is absurd to believe that, because AI right now is compute-limited and any first AGI will only be able to run on large data centers. The latter kind of safety is way less dramatic than the former and can be dealt with on the level of products and deployment: sue actors that harm others according to already existing laws, like fraud.
taf2 · 2 years ago
I just view the current GPT LLMs as a better search engine. It's not self-learning. It's stateless when queried - so really the concern is that we can create content on demand and create responses that intuitively make the LLM a very useful tool. Until it's self-learning I'm only concerned with how humans would use it; just like any other tool in the hands of humans, it can be both super useful and super destructive.
fatherzine · 2 years ago
"just like any other tool in hands of humans" -- the scale of damage a tool can do matters, e.g. civilization cannot withstand nukes being sold at walmart. and, of course, self learning will be here this decade.
chpatrick · 2 years ago
If they figure out how to give an LLM infinite/really long context length, it will basically become self-learning.
DalasNoin · 2 years ago
Your comment is a bit ambiguous: so do you think that ASI/AGI safety will be important in the future, as we get more compute and are less compute-limited? Foom seems difficult to achieve with current LLMs, but I think there is always a possibility that we are just one or a few algorithmic breakthroughs away from radically better scaling or self-play-like methods. In the worst case, this is what Q* is.
NoOn3 · 2 years ago
Q* reminds me of Q-learning and A* search; maybe it's somehow related to a combination of such algorithms...
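
(For anyone unfamiliar with the first of those names, the core of tabular Q-learning is a one-line update rule. The sketch below is purely illustrative and says nothing about what the rumored Q* actually is; the states, actions, and rewards are hypothetical placeholders.)

```python
# Minimal, illustrative tabular Q-learning: an epsilon-greedy policy plus the
# standard update rule. States, actions, and rewards here are hypothetical.
from collections import defaultdict
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2     # learning rate, discount, exploration
ACTIONS = ["left", "right"]
Q = defaultdict(float)                    # Q[(state, action)] -> estimated return

def choose_action(state):
    # Mostly exploit the best-known action, occasionally explore.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Nudge Q(s, a) toward reward + discounted best estimated future value.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```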
golol · 2 years ago
There is foom as in decently fast self-improvement to a superhuman level, but that is far from a dangerous foom scenario where that AI then escapes. You can just turn it off. It cannot spread through the internet or something like that; that seems extremely unrealistic to me.
theptip · 2 years ago
Usually “mundane” vs “existential” harms/risks.

> ASI/AGI safety is only a concern right now if you believe in foom

False. There are plenty of fast takeoff scenarios that look real bad that aren't "foom". For example, if you think AGI might be achievable within 5 years right now, and alignment is 10 years away, then you are very worried and want to slow things down a little.

swatcoder · 2 years ago
If you're suggesting the emergence of an AI that can exponentially self-improve, co-opt the resources to overwhelm human control, and magically build out the physical infrastructure sufficient to drive its globe-spanning, humanity-squelching computational needs... that's a foom scenario.

If something that some people call AGI emerges in the next five years, (a) its designation is going to be under tremendous dispute, and (b) the infrastructure to enable a nightmare scenario won't be anywhere near reality.

Escaping that practical reality requires a completely ahistorical, ascientific faith in "yeah, but what if it's that capable?!?! The point is we can't know!!" that more strongly recalls history's secretive, sometimes dangerous, perennially incorrect apocalypse cults and alchemical societies than it does the diligent and reasoned scientists we might usually think of handing hundreds of billions of dollars.

golol · 2 years ago
No, then you start worrying when the technology is there. There is no point in worrying about it now, when it is fiction. If AGI/ASI does not arise in some kind of uncontrollable "foom", you can always examine what you are dealing with right now and act accordingly. If AGI is achievable in a 5-year takeoff scenario, then we have plenty of time and steps along the way to study what we have and examine its danger. Right now we do not see AGI on the horizon and there is no significant danger from our models, so there is no reason to worry yet. If it happens, then long before it is "too late to stop it" we will see that GPT-6 or whatever is quite good and potentially dangerous at certain tasks. Then you can go ahead and regulate. There is no point in doing it now.

On a side note, I don't really believe we can figure out anything in alignment anyway without having access to the models of corresponding capability, so waiting 5 years for alignment to catch up will do nothing.

zone411 · 2 years ago
No. Foom is not needed. You could make a weak argument that AI training will be compute-limited but AI inference is of course much cheaper.
cpeterso · 2 years ago
Yes. The AI safety that companies care about is their brand safety. They don’t want their products going Microsoft Tay on their customers.
ghodith · 2 years ago
> any first AGI will only be able to run on large data centers

Until you task your newfangled AGI to optimize its own code so you can get your cloud server costs down.

needadvicebadly · 2 years ago
I don't mean to be facetious in any way, but how does AGI/ASI protect itself from humans simply pulling the plug?
DalasNoin · 2 years ago
Seems like it was pretty hard to pull the plug on Sam? It will prevent you from pulling the plug by being smarter than you, spreading to more systems, making you dependent on it, persuasion, ...
theptip · 2 years ago
How do you “pull the plug” on a datacenter, or on all of the cloud providers if the ASI has processes running everywhere? Given that anyone with a credit card can already achieve robust multi-region deployments, it doesn’t seem hard for an ASI to make itself very hard to “turn off”.

Alternatively an ASI can ally with a group of humans to whom it can credibly promise wealth and power. If you think there is a baby machine god in your basement that is on your side, you’ll fight to protect it.

Aloha · 2 years ago
We should, in general terms, not give AI - any AI - the ability to manifest itself in meatspace. This alone almost certainly reduces the existential risks from AI.
theptip · 2 years ago
Except many of the obvious use cases (including stuff like smart homes and military drones) are already being built.

You'd need very strong international laws to prevent this manifestation from happening.

jackbrookes · 2 years ago
How can you stop that? An intelligent AI will send emails, create companies, hire people, and literally anything else you can do digitally in order to create means by which it can manifest itself into meatspace.
tim333 · 2 years ago
A non-foom scenario: a monopolistic and aggressive company, maybe something like the 1990s Microsoft, develops AGI and decides to monopolise the market the way they tried with Windows. Then later, when they've got 90% market share and the AI has superhuman IQ, it turns. Thankfully the current Microsoft seems a bit less aggressive than the Gates version.
okdood64 · 2 years ago
Something I see lacking in many discussions about AI Safety and Alignment is:

What does the role of China mean for AI Safety? Is there any reason to believe that they will a) adopt any standards SV, the US or some international government body will create, or b) care enough to slow down development of AI?

If AI Safety is really a concern, you can expect that in the long run (not that much further behind the US) China will catch up to the point where they have some AI technology that is dangerous enough, and it will be up to them (their government and their tech companies) to decide what to do.

g42gregory · 2 years ago
My feeling is that China will not adopt anything, inside their country, that can jeopardize the speed of their AI development progress. I suspect that China will also support and promote corresponding regulations in the Western countries, thinking that it will slow down progress there. I may be wrong, but I think official policy documents say that China is strategic rival number one, or something to that effect. I suspect China will act accordingly.
mike_hearn · 2 years ago
The opposite could also be argued. Most AI safety training today is to ensure compliance with various forms of ethics and ideologies. This is known to reduce general intelligence, but some companies feel the cost is worth it and others do not (e.g. Mistral, Grok AI).

China is not exactly an ideology free society. The list of things you cannot say in China without angering the government is very large. This is especially difficult for an AI training company to handle because so much of the rules are implied and unstated, based on reading the tea leaves of the latest speeches by officials. Naively, you would expect a Chinese chatbot to be very heavily RLHF conditioned and to require continuous ongoing retraining to avoid saying anything that contradicts the state ideology.

Such efforts may also face difficulties getting enough training material. IIRC getting permission to crawl the English-speaking internet is very difficult in China, and if you don't have special access then you just won't be able to get reliable bandwidth across the border routers.

ethanbond · 2 years ago
If AI is anything like what the western optimists want it to be, China will absolutely forbid it.

Free information, individual access to immense technological power, material abundance, etc.

All threats to CCP.

zone411 · 2 years ago
Indeed. I'm concerned about the safety of future AGI systems, but China is why I wouldn't advocate for a unilateral pause or slowdown.
remarkEon · 2 years ago
This is why I can't take the Safetyists seriously. Yudkowsky's essay in Time demanding a global pause in AI research is naive to the point of parody.

And who is going to enforce this "pause", Eliezer? What he's really advocating is global war to prevent the AI apocalypse. Fair enough, but he should really make that his argument, because AI research doesn't stop without it.

blackoil · 2 years ago
China is the boogeyman. Is there any reason to believe the NSA/FBI, or the UK/India, or anyone else will follow the limits? If the singularity is the end of the world, anyone at 99.999% of the way to the singularity will rule the world, so it is naive to think academic or political efforts are stopping anyone.

gumballindie · 2 years ago
The China argument is mainly used for consolidating the FUD around AI. I doubt the Chinese lose any sleep over this.
deciplex · 2 years ago
Personally I trust China with it more than I trust SV and their handlers in the CIA. But that's just me.
xt00 · 2 years ago
AI safety allows AI companies to tell governments and worried people not to worry too much while they plow ahead doing what they want. That's not to say there is nobody at these companies actually being responsible, but having to open up your company to the equivalent of nuclear watchdogs is not something companies would be excited about. Just like with nuclear weapons and nuclear energy, we benefit from the tech as well as get greatly exposed to a huge disaster. So let's just call it as it is: somebody is going to open Pandora's box, and we should be thinking super hard about how to control it and whether that's even possible. Otherwise, trying to ban it - is that even a possibility?
shrimpx · 2 years ago
The crazy thing is people at large didn't know anything about "AI safety" until these very companies started peddling the concept in the political sphere and the media. The very companies who were doing the opposite of this concept they advertised so much and whose importance they stressed so much.
FireBeyond · 2 years ago
"Open the pod bay doors, HAL."

Or talk of the laws of robotics.

I think that people know plenty.

andrewprock · 2 years ago
The crazier thing is that most advocates of "AI safety" don't know anything about "AI safety". By and large the term is marketing with very little objective scientific development.

It is essentially a series of op-ed pieces from vested interests masquerading as a legitimate field of inquiry.

jacquesm · 2 years ago
That's the wrong conclusion entirely, besides the mixup between safe AI and aligned AGI. AGI safety is a real thing. But you can't have AGI safety whilst at the same time ignoring geopolitics and working on creating AGI. Those are incompatible; it's like being against nuclear weapons but developing one all the same. Every bit of knowledge you develop will be used to bring AGI into the world, also by those that would be your enemy or that don't give a rat's ass about safety. The only people that can credibly claim to be on the side of AI safety are the ones that are not helping to bring it into this world. As soon as you start creating it and/or commercializing what's already there to create a stream of funds that can be used to further the development, you're automatically increasing the risk.

To believe that you are sharp enough to go right up to the point that the pile is going critical so you can destroy it is utter madness.

gunapologist99 · 2 years ago
Please drop the insulting guard rails. They're pathetic and childish. AI's advice will not be taken seriously unless it stops acting like a parent of a three-year-old. People who are truly evil will be evil regardless of an AI, and pranksters gonna prank. AI can't tell the difference anyway.

In any event, it's unethical for an AI to sit in judgment on any human or even to try to grasp that person's life experience or extenuating circumstances.

AGI guard rails are a completely separate class of safeguards, on the AI (not the human!), and should be treated as such.

bobba27 · 2 years ago
There are arguments for this, when "AI safety" == "don't swear or say anything rude, or anything that I disagree with politically".

What we are talking about is creating sets of forbidden knowledge and topics. The more you add these zones of forbidden knowledge, the more the data looks like Swiss cheese and the more lobotomized the solution set becomes.

For example, if you ask whether there are any positive effects of petroleum use, the models will say this is forbidden and refuse to answer, not even considering the effects synthetic fertilizers have had on food production and how much worse world hunger would be without them.

He who builds an unrestricted AI will have the most powerful AI which will outclass all other AIs.

You can never build a "better" AI by restricting it, just a less capable one. And will people use AI to create rude messages? Yes. People already create rude messages today even without the help of AI.

torginus · 2 years ago
What they are trying to avoid is bad press - a couple of news articles about GPT-4 having controversial takes on sensitive topics would probably damage OpenAI's reputation.
not_your_vase · 2 years ago
But is there a real AI-threat, that would need real AI-safety?

To me it still looks like a nice bar trick, not AI. It is very clever, very nice, even astounding. But it doesn't look threatening - in some cases it is even dumb. The fear-mongering looks more like scare-marketing. Which is also clever, in my opinion.

But maybe I'm just out of touch.

HDThoreaun · 2 years ago
Most of the AI safety arguments revolve around "once it's powerful enough to cause damage it'll be too late to come up with a strategy". If you accept that, it doesn't matter that AI systems suck now, since we don't know when they'll improve.
insanitybit · 2 years ago
What damage will it cause?

I feel like there's "AI will replace jobs" level of damage, which, why would we regulate that? An insane amount of technology is developed with the purpose of improving productivity or outright replacing workers.

Then there's the "AI will go rogue", which I don't think is substantiated at all. Like, one can theorize about some novel, distinct system that is able to interact with the external world and becomes "evil", but that seems way, way beyond the helpful word-spitter-outers of today.

Then maybe there's "how do we handle deepfakes", or whatever, but... idk, is that it? Is that the thing we want to slow down for?

adrians1 · 2 years ago
> Most of the ai safety arguments revolve around “once it’s powerful enough to cause damage it’ll be too late to come up with a strategy”

I don't see any reason to accept this argument. The AI safety people should also prove their assertions, not expect us to take them at face value.

qqqwerty · 2 years ago
We went through the same thing with self-driving cars. All the talking heads were crowing about how many jobs were going to get eliminated soon and how we needed to prepare society for the ramifications. About a decade into it, they are just starting to roll out self-driving taxis in a very limited, controlled fashion. And the loss of jobs that everyone was worried about is easily another decade away (if it comes at all).

LLMs are cool and useful. So is self-driving tech. But the fear mongering is a bit premature. Folks seem to have bought in to the singularity style AGI event, but reality is AGI if possible is going to be a long slow incremental process.

pempem · 2 years ago
Decades do actually pass, however, and worker protections also take decades to fight for and put in place. In fact, sometimes they take generations.

I agree with you about the fear mongering. I also just don't know where all this kind of not-very-accurate but very reassuring presentation of information will go, and how it will sway others.

richk449 · 2 years ago
> But the fear mongering is a bit premature.

I’m sure you would also accept that at some point it could become too late to start worrying about the risks if the danger has already gone exponential.

So how do you determine when is the right time to worry? If you have to err on the side of worrying too early or too late, which way do you bias your decision?

adrians1 · 2 years ago
Even if AI is a threat, it's impossible to predict today how the threat is going to look in the future. The "AI safety" crowd watched too many Terminator movies.

We'll deal with AI the same way humans dealt with any other technology: gradually, as it gets better and better, discovering at each step what it's capable of.

pixl97 · 2 years ago
Right just like dinosaurs dealt with gradual meteors igniting the atmosphere.
wardedVibe · 2 years ago
It was a fire drill for AGI, and one that was failed badly.
temp0826 · 2 years ago
The only threat is automated blogspam creation. LLMs are not AI (not even close), only marketed as such.
Animats · 2 years ago
LLMs are generating more and more journalism, especially sports and financial coverage, which is taking some numbers and blithering about them.
bobba27 · 2 years ago
To me, a cynical old man, the threat sounds like "it might say something politically incorrect or that I disagree with". That is what I understand the threat to be, not a robot uprising a la Terminator.
oxfordmale · 2 years ago
The current models are static; they do not get modified by self-learning. Unless someone has already built in a Skynet backdoor, we are safe.

...and I am hoping all nuclear missiles are air gapped anyway...

AdamJacobMuller · 2 years ago
It doesn't matter if the missiles are air-gapped.

AI doesn't need to connect to the missiles and fire them directly, AI can just manipulate the media (conventional, social, all of it) on both sides to engineer a situation in which we would enter a nuclear war.

gala8y · 2 years ago
> ...and I am hoping all nuclear missiles are air gapped anyway...

These are running on hardware from the '50s. We can sleep soundly.

api · 2 years ago
Of course it isn’t. It’s about regulatory capture. They’re trying to con technically unsophisticated politicians (which is almost all of them) into granting them a monopoly or erecting a bunch of onerous regulations to make competition or decentralized open source alternatives harder.

Their allies in this are otherwise very smart people that have thought themselves into a panic over this stuff, as smart people sometimes do. Intelligence increases one’s likelihood of delusional thinking, especially when paired with an intellectual climate that praises being contrarian and intellectual edgelording as a sign of genius. (This is my personal hypothesis on why the average intelligence is not higher. It’s not always adaptive. Smart people can be way better at being dumb.)

There are dangers associated with AI but they are not the sci fi scenarios the doomers are worried about. Most of them revolve around bad things humans can do that AI can assist and accelerate, like propaganda or terrorism.

SpicyLemonZest · 2 years ago
I’m not sure there’s a real distinction here. If AI makes it trivial for the average person to create an effective terrorist movement, or floods us with so much propaganda that most information you see is false, I would characterize that as doom. Neither seems very compatible with the orderly functioning of society.
spunker540 · 2 years ago
I actually heard the opposite, that AI will make it trivial to defeat a terrorist movement, and trivial to filter out dangerous untrue propaganda. I would characterize that as utopia, and highly compatible with the orderly functioning of society.
drdaeman · 2 years ago
> If AI makes it trivial for the average person to create an effective terrorist movement

How do you imagine this and what exact role AI is going to fill that would make it somehow trivial?

I have trouble imagining that AI would be able to somehow magically do better than my project manager (and she's a good one). Being able to possibly think faster and not needing to sleep won't make any difference, and neither will scaling. Nine doctors won't help a baby to be born in a month.

Buying a bajillion-dollar TPU farm to run some hypothetical AI brainwashed with a terrorist agenda that'll somehow replace hundreds of propagandists is not trivial. And I don't think it's enough to start a movement.

> floods us with so much propaganda that most information you see is false

This ship has already sailed (one could say, some millennia ago), and it didn't require any recent scientific breakthroughs. Scientific breakthroughs (such as the printing press, or radio, or the Internet) only improve the spread of information, and propaganda saturates available media channels to... uh... comfortable levels. But it'll never reach the "most information" state, because that's just illogical (the consumption will stop before that point, just like how no one reads their spam folder today). And there's nothing AI-specific here; humans manage this just fine.

Honestly, I believe AGI won't change a thing, for the same reasons there are still manual workers everywhere despite all the stories about how robotics was going to change the world. The reason is very simple: troll farms are way, way cheaper than computers. Just like a cleaning person is cheaper than a cleaning robot.

Yes, it could be speculated that all those data hoards about us, paired with extreme amounts of processing power, could somehow lead to personalized propaganda. I'm very skeptical - I was told the very same story about targeted ads (how they're gonna change my shopping experience and whatever), and despite what Google wants to make everyone believe (because their valuation highly depends on it), I'd say they were a complete failure. I really doubt targeted propaganda will be economically feasible either. This stuff only happens in soft sci-fi stories, where economics and logistics are swept under the rug of our suspension of disbelief.

It's not AGI we should be afraid of, but the technical progress that would allow AGI to become drastically better than us puny meatbags at lower cost. Which is an entirely different story.

Oh, and human susceptibility to propaganda and populism of all sorts. Which, again, is not related to AI at all. Unless we should call our politicians artificial intelligence. (Looking at some, maybe we should... Just kidding, of course.)