phillipcarter · 3 years ago
A specific risk I am worried about today is using AI to power impactful decisions in high-risk infrastructure that people rely on. I do not want my power company making decisions about me based on a large language model that regularly gets things wrong. Not without significant controls in place, explainability/auditability, and the ability to quickly reverse any bad decision. Replace "power company" with the numerous things we all rely on today and it freaks me out.

People are jumping too quickly to deeply integrate this tech with everyday things, and while that's great for many use cases, it's not so great for others.

You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon. The fact that they've tricked some really smart people into going along with their shenanigans is extremely disappointing.

heavyset_go · 3 years ago
> You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon. The fact that they've tricked some really smart people into going along with their shenanigans is extremely disappointing.

Many of those people were not tricked, tech CEOs are involving themselves with this and picking up Yudkowsky's particular strain of AI safety intentionally.

Instead of focusing on the very real problems AI causes and exacerbates today, problems that might affect tech companies' bottom lines if addressed, CEOs can control the narrative and appear proactive by buying into this brand of science fiction.

Yudkowsky's take on AI safety is to tech companies what greenwashing is to the auto industry and oil companies.

zoogeny · 3 years ago
> Yudkowsky's take on AI safety is to tech companies what greenwashing is to the auto industry and oil companies.

The most useful of fools. Yudkowsky acts, speaks, dresses and looks like the most neck-beard of neck-beards. He wears that cringey trilby hat in podcast interviews.

I've often thought that the popular promotion of people like Michael Moore and even Noam Chomsky works in the favor of those in power. Choose the ugliest, most boring and laughable opponents and then allow them to speak while just rolling your eyes and giggling. It works even better when they are speaking sense, since you can demonstrate you are allowing legitimate dissent.

I wonder if Chomsky ever suspected that his own promotion and fame were the result of the establishment using his yawn-inducing academic tone in their own attempts to manufacture consent. Can you imagine someone who is obsessed with the Kardashians being caught dead agreeing with Chomsky, Moore or Yudkowsky? And then consider whether those who are swayed by such superficial appearances are the majority or the minority, and how that affects democracy.

ChadNauseam · 3 years ago
> You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon. The fact that they've tricked some really smart people into going along with their shenanigans is extremely disappointing.

If they've tricked smart people into going along with their shenanigans, it was by making clear technical arguments for why AGI is an existential risk. The core argument is just

1. We might create ML models smarter than ourselves soon

2. We don't really understand how ML models work or what they might do

3. That seems dangerous

There's more to it than that of course, but most of the "more to it" is justifications for each of those steps (looking at the historical rate of progress, guessing what might go wrong during the training process, etc.).

The people who dismiss that AI is an existential risk might have really good counterarguments, but I've never heard one. The only counterarguments people seem to make are "people are scared of technology all the time, but usually they're wrong to be", "that seems like sci-fi nonsense", etc. If you want people to stop being "tricked" by Yudkowsky and co, the best way to do that would probably be to come up with some counterarguments and communicate them.

wilsonnb3 · 3 years ago
> that seems like sci-fi nonsense

Your burden of proof is backwards.

It is on the AI-doomers to explain why sci-fi concepts like “AI improving itself into a super intelligence” or “AGI smart enough to kill everyone to make paper clips while simultaneously stupid enough to not realize that no one will need paper clips if they are all dead” have any relevance in the real world.

The entire AI-doomer worldview is built off of unproven assumptions about the nature of intelligence and what computers are capable of, largely because thought leaders in the movement are incapable of separating sci-fi from reality.

baxtr · 3 years ago
Counterarguments to what exactly? Your line of thought is: some advanced technology is potentially dangerous. This is so vague, how can anyone counterargue? The sun is dangerous, water can be very dangerous, even food! I’m not sure I can follow.
Avshalom · 3 years ago
You tell them that this is the greatest danger to humanity of all time and that they are uniquely suited to averting that danger and they don't have to change or sacrifice or risk anything in their life while fighting this danger.

It's a very compelling combination of ego and convenience.

ramblenode · 3 years ago
> If they've tricked smart people into going along with their shenanigans, it was by making clear technical arguments for why AGI is an existential risk.

To me it doesn't feel technical at all--just superficial use of some domain verbiage with lots of degrees of freedom to duct tape it all together into a story. He very much reminds me of Eric Drexler and the nanotech doomerism of the 80s and 90s. Guy also had all the right verbiage and a small following of fairly educated people. But where is the grey goo?

If we need a counterargument to Yudkowsky do we also need one to Drexler?

nradov · 3 years ago
None of those arguments for existential risk are actually "clear" or "technical". Just a lot of hand waving which only impresses those who don't understand the technology.

Deleted Comment

mbgerring · 3 years ago
By “Clear technical arguments” you’re referring to tens of thousands of words of unreadable fan fiction
tlb · 3 years ago
I agree that’s a risk, but it doesn’t seem like a different magnitude of risk than power companies using other crappy software to control their systems. It might break a few times, which will be annoying, but it won’t lead to the immediate end of civilization. They can use the risk handling mechanisms they’ve always used for infrastructure, which seem to give acceptable results.

According to the Yuddites, AI is likely to cause total extinction the first time something goes wrong, so the usual mechanisms aren’t enough.

staunton · 3 years ago
The way I see it is that the existential risk comes from humans losing control. Here is a very abstract argument for why AI is relevant here.

Right now, there is no central entity that has supreme control. Rather, humanity is guided by a hierarchy of organizations made from individual humans. The decisions and behavior of these organizations are determined both by the desires and needs of individual humans (basic needs, status, community) and by dynamics acting directly on organizations (competition for survival, selection pressure for increasing control over resources). The system is in a somewhat stable equilibrium because no organization can achieve domination, due to a balance of power and due to many individual humans not wanting that to happen.

AI does two new things. First, it allows for potentially immortal agents with completely stable goals, which they will pursue with no regard for anything else. This is something individual humans cannot come even close to. Organizations can come closer, but still not that close. Second, fast progress in AI allows for a temporary large imbalance of power. In particular, AI has the potential to enable scalable and effective technologies for monitoring and controlling humans. Using such means, an organization (which then naturally ends up partially controlled by AIs) may outcompete others and consolidate control.

The danger is that this consolidation may be completed for the first time in history. No organization has ever achieved world domination, though a few have come somewhat close. I would say one of the strong factors that prevented this from happening was the difficulty of maintaining the organization's goal of domination, while influential people die, change their minds or pursue their own goals above those of the organization. Systems involving human organizations have slack. Systems of organizations made up of or controlled by AIs may be very different.

The crucial point for the "existential" part of the existential risk is that we all used to believe world domination is obviously impossible. This must be carefully reevaluated with AI entering the picture. Even a small danger is worth considering and large mitigation efforts because a failure may be permanent and irreversible.

Tanoc · 3 years ago
The Yuddites are going for purposeful exaggeration because discussing things in a nuanced manner won't get support quickly enough from people in power. Fear is a stronger immediate motivator than curiosity or distrust. The reality is that multiple AI systems will likely contradict each other, leading to systems collapse and killing thousands of people as power, transit, or trade systems shut down. That is of course a massive concern, but it isn't as scary as total extinction, and the fear is that destruction on that scale will be ignored as an anomaly or simply an operational risk.

You have an extremely high risk of dying in a vehicle collision, but people dismiss the risk because it isn't immediately apparent that 40,000 people die every year in the U.S. from that very cause. Certain deadly events become common enough that they get ignored due to desensitization, or become categorized as individual and anecdotal tragedies because they aren't mass casualty events like a plane crash or a hotel collapsing. If a few thousand people die every quarter from AI lockups or failures that nobody could've predicted or prevented, because the AI is fully operationally autonomous, it will be treated the same as vehicle deaths. And that is the true fear.

digging · 3 years ago
> AI is likely to cause total extinction the first time something goes wrong

Maybe that is an argument you've seen, but the more convincing argument I know of is that we have no way of knowing we've gone too far until after the fact. Doesn't matter how many times it goes right first, we don't even know how to determine if and when it has gone wrong. A powerful enough AI could deceive us into thinking things are going "right".

nradov · 3 years ago
I appreciate your concern, but the same issue would apply to existing expert systems or linear regression models that wouldn't typically be classified as "AI" today. Most power companies are subject to fairly strict regulations, so they generally can't cut off customers unless bills are badly overdue or the customer tampers with utility equipment. In some jurisdictions the power companies are required to report suspicious usage patterns to law enforcement; that may be objectionable on privacy grounds, but that's a political issue, not really an AI issue. Where there's a large power imbalance between private citizens and large institutions, we should address that with laws and regulations based on impact, fairness, and accountability rather than trying to specify how those institutions can use particular algorithms.
majormajor · 3 years ago
Unfortunately in the US if your law isn't that specific - and almost any law about "impact, fairness, and accountability" is going to be much harder to tighten up, interpretation-wise, than one about "don't fucking do these things" - then it's a big target for judges who don't like your definition of impact/fairness/accountability, or the definition of the agency you create to oversee it.

That's why the Republican party has been gunning so hard in the past couple of decades for the Chevron deference precedent: they want to make it harder to have legally-enforceable regulations that don't require lawmakers to get into the weeds (and risk being narrow and outdated soon).

ethanbond · 3 years ago
I don’t think it’s honest to suggest linear regression models fail in the same way current AI models do. Linear regression “fails” when you are an outlier case, whereas AI systems just occasionally truly fuck up in unpredictable ways on standard inputs.
jstarfish · 3 years ago
> In some jurisdictions the power companies are required to report suspicious usage patterns to law enforcement

It's going to be a dark day when some idiot passes a law forcing AIs to serve as mandatory reporters.

verisimi · 3 years ago
This is going to sound flippant, but I'm serious: in a world of nonsense, something that generates nonsense (ai) is a fantastic tool.

The issue is our acceptance of information as if it were true, as if misleading ideas were not monetisable, as if we can outsource the basis for why we make decisions to an external authority. Hardly anyone verifies anything. Most simply accept whatever they are told. Deep skepticism and empiricism are used by very few - instead we have been taught to trust authoritative sources (media, academia) which can be both well meaning and wrong.

Anyway, skepticism and personal verification are the best answer I have to the whole saga of how to determine truth from lies. This issue is under an especially bright spotlight thanks to ai.

I'm pessimistic over whether many will be prepared to 'verify better' in the future. Unfortunately, I suspect things will have to get a lot worse before we start to learn. It seems that ai can create compelling content, that will be tailored to each individual - who could resist 24/7 pandering to one's predilections and biases?

spandrew · 3 years ago
While I don't agree with Yudkowsky's overhyping of the tech either, this stance of slowing down its proliferation in infrastructure matters is also very limiting.

In the case of U Michigan and Flint's water infrastructure, predictive AI far, far, far outclassed the predictions of local contractors on where the actual lead pipes were buried. The AI was an order of magnitude more accurate.

Regardless of the AI's efficacy, Flint's mayor temporarily replaced the AI (because AI fear-mongering stoked classism) with a contractor who was right only ~15% of the time vs. the AI's 70%+. Those numbers affect thousands and thousands of people's access to a basic human right: water. A US court determined that the AI had less bias, and that there should be no discrimination in deciding where to dig up pipes based on where someone lived (i.e. richer neighbourhoods).

The benefit of AI is it makes decisions MORE transparent, not less. It pulls apart prediction from judgement in decision making. So you can tweak it, call bullshit on it, etc.

murderberry · 3 years ago
A power company is unlikely to do this, in part because they are not an industry that fetishizes growth at any cost with the value of individual users approaching zero. And in part because in many markets, they're regulated and need to provide service.

But our industry already operates this way. Google will cut you off for triggering automated rules, and good luck getting human help. AI will not make it worse, but it will be used by such businesses to give their CS the appearance of being better. It will feel like you're talking to a real person again.

bell-cot · 3 years ago
> Google will cut you off...AI...will be used by such businesses to give their CS the appearance of being better

True in a fair number of cases...but, based on their actions to date, I doubt that Google cares enough about appearances to bother.

edanm · 3 years ago
> You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon. The fact that they've tricked some really smart people into going along with their shenanigans is extremely disappointing.

I think it's more disappointing that the fact that lots of smart people are extremely worried about something doesn't cause onlookers to get worried - it causes them to be dismissive.

Yudkowsky and others had certain worries, and as more people heard their arguments and the technology improved, more and more smart people got convinced of those arguments. Instead of listening to them or considering that they might have a point, many people here are extremely dismissive - "they're tricked", "they watch too many sci-fi movies", "they're corporate shills", etc. - even though all of these arguments can be refuted by two simple observations: there are many different people with different backgrounds getting worried, and most of them weren't worried 10 years ago but are worried now, meaning their point of view changed with growing evidence.

Let me ask you this: at what point will you be worried? What would it take? If the fact that some of the people who built these technologies are worried isn't enough to cause you to change your mind (or at least consider that they might have a point), what will?

Note: For the record, I'm also worried about your specific "today" worries. I just hate the dismissiveness of your last paragraph (and of a common sentiment on HN).

digging · 3 years ago
> You won't find the likes of Yudkowsky talking about this because they'd rather go on about their fan fiction than actually work to prevent real problems we may face soon.

This is a totally unreasonable take, unless you believe that AI can't possibly pose an existential risk within the next couple decades or so. Actually I'd love to know what your estimate actually is for AI becoming an existential threat - does 30 years sound short to you? Because to a lot of AI experts, 30 years was their estimate before ChatGPT, and could be considered wildly optimistic.

You clearly don't, but imagine you felt that AI had a 1-in-10 chance of shutting down all power generation on earth. This would collapse civilization. Would you be worried about it? As a reminder, you're still allowed to be worried about other things at the same time. It's a simple y/n.

Avshalom · 3 years ago
>AI had a 1-in-10 chance of shutting down all power generation on earth.

Here's the thing: I do actually kind of believe that. Not because I think that AI will be superintelligent and will do so of its own volition. I think that despite AI being barely functional, people will put it in charge of power generation to save a buck and it will just fail.

haldujai · 3 years ago
What is so impressive about ChatGPT that it poses an existential threat? Many experts are highly critical (e.g. LeCun) and/or believe LLMs are nothing more than stochastic parrots.

Anyone who was working with transformers could have seen ChatGPT on the horizon, it wasn’t surprising at all that scaling an autoregressive model can result in something seemingly intelligent.

Where is 1 in 10 coming from? Is this a ‘gut feeling’ because one does not understand LLMs or is this factually based?

What is my estimate for AI being an existential risk in the next couple of decades? Depends on if we find something that actually resembles AGI which is impossible to predict. Based solely on current technology + scaling I would personally put the chance at essentially 0%.

mtlmtlmtlmtl · 3 years ago
I'll take researchers' gut feelings more seriously once we're out of the ongoing hype cycle. Right now most people are probably overestimating applicability of current advances, and that colours their predictions. Researchers are just as affected by this.

And it's not like this estimate is based on a lot of concrete reasoning. So yeah, I would expect it to fluctuate wildly at an unexpected development in the field initially, then settle down around a slight change. Which is basically the definition of a hype cycle.

JohnFen · 3 years ago
> unless you believe that AI can't possibly pose an existential risk within the next couple decades or so.

I believe that AI can't possibly pose an existential risk in the next decade or two. I believe AI poses a great risk, but an economic one, and not an existential one.

> Actually I'd love to know what your estimate actually is for AI becoming an existential threat

My estimate is: never. At least not in the form of some superintelligent AGI.

phillipcarter · 3 years ago
I don't worry about scifi stuff that doesn't appear to have any actual bearing in reality.
droopyEyelids · 3 years ago
Do any of these LLMs have their own agenda where they operate under their own agency and could plausibly take over all power generation on earth?

Or are we talking about a variation of the system we have right now, where someone could use the AI as a part of a control system and then the "algorithm" doesn't operate the way we want and causes an outage? Because that happens all the time without LLMs.

I am struggling to understand you people who jump from "LLMs are an amazing technology" to "A new lifeform is here making moves to seize control!"

strken · 3 years ago
This is akin to saying the risk of nuclear weapons isn't that they'll be used in large numbers, but that they'll cause a power imbalance that lets nuclear-armed nations extend the nuclear umbrella as a diplomatic tool, act with relative impunity, and use them at a small scale against non-nuclear opponents.

Yes, that's a problem, and it's a problem that has a lot more examples in the real world. It doesn't automatically invalidate the problem of large-scale nuclear war. They're both big problems.

Same with climate change vs air pollution, political scandal deepfakes vs naked celebrity deepfakes, etc.

mxkopy · 3 years ago
I mean the analogy is especially apt here. The average person literally can't care about a large scale nuclear war. Their truth table has only the options "live normally" and "die".

In the same vein misanthropic AGI should be delegated to top secret committees that no one knows about. Broadcasting that concern live is a distraction from the real issues the average person should consider: how do I get these tools away from organizations uninterested in me?

fennecfoxy · 3 years ago
As with most problems in the world, we're being failed by our sluggish, corrupt governments voted for by apathetic, distracted citizens in an endless downward spiralling cycle of more corrupt, more distracted, more corrupt, more distracted.
samstave · 3 years ago
What if there were a law in place like a FOIA for AI, whereby I can request the actual code/data that caused the AI to come to its conclusion?

So if an AI-generated bill for service said I owe $N, I should be allowed to see all the code and logic that arrived at that decision.
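To make the idea concrete, here is a minimal sketch of what such an auditable decision record might look like - every field name here is hypothetical, not any real utility's schema:

    import datetime
    import hashlib
    import json

    def make_decision_record(model_id, model_version, inputs, output, explanation):
        """Bundle everything needed to reproduce and contest an automated decision."""
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_id": model_id,
            "model_version": model_version,   # the exact weights/config used
            "inputs": inputs,                 # the data the decision was based on
            "output": output,                 # e.g. the billed amount
            "explanation": explanation,       # human-readable rationale, if any
        }
        # Hash the record so neither side can quietly alter it after the fact.
        record["sha256"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        return record

    record = make_decision_record(
        model_id="billing-model",
        model_version="2024-05-01",
        inputs={"kwh_used": 1240, "rate_plan": "TOU-B"},
        output={"amount_due": 212.40},
        explanation="Usage priced at time-of-use rates; no anomalies flagged.",
    )
    print(json.dumps(record, indent=2))

Hand the customer the record (or at least the hash) at decision time, and a FOIA-style request later only has to match against it.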

__loam · 3 years ago
This is 100% the right view. The biggest AI danger is relying on unreliable stochastic systems for automated decision-making, resulting in some kind of kafka-esque nightmare rather than something flowery like human extinction.
andrei_says_ · 3 years ago
What use cases is it great for?

I’m struggling on this side of the equation and find the hype and noise depressing.

nonethewiser · 3 years ago
That is absolutely a risk. But it's not really the AI that is risky. It's the people misusing tools.
digging · 3 years ago
No, it's also the AI. There are two broad categories of horrible outcomes:

1. Bad people control and use AI to dominate/destroy the world

2. An AI is created which resists human control entirely, and it decides to take an action that doesn't bode well for us. It is a fundamentally inscrutable mind to us, so we don't know what action it will take or why.

cmilton · 3 years ago
>I do not want my power company making decisions about me based on a large language model

What data would comprise such a model?

majormajor · 3 years ago
Credit history, social media footprint, payment history, etc. Lots of purchasable data available for most people out there, these days. Someone could easily write a prompt today like:

"""Here is a person's power usage history:

{{ power usage by month }}

Here is their bill payment history: amount, date bill sent, date bill due, date payment received, if any:

{{ history }}

Here is their credit history for the last five years:

{{ credit history }}

Here is the location of their home and some overall information about the grid:

{{ grid info }}

Please give me a strategy for maximizing profit from this customer, options include "disconnection", "encourage them to use power at different times", "encourage them to buy solar", "move them to variable usage-based billing".

"""

Power companies in most places are probably too regulated to get too sneaky, but I'm sure there's shady stuff that could be done, especially if you can similarly tailor the marketing to each user e.g. "we want this person to get solar since the grid going towards them is near its max capacity and we don't want to invest in upgrading it, tailor the messaging towards them based on their credit history and social media profiles."

If you imagine a less regulated industry there's even more room for price discrimination and such.

The issues, I think, are at least three-fold:

1) Do we want that sort of individualized attention per-user, based not just on observed behavioral metrics (ads clicked, sites visited) but on every word they've typed online, too?

2) Who is responsible if this model then makes decisions that harm people? The people who trained it? The people who used it? The CEO? It's a wonderful tool for bureaucracy to avoid there having to be a "decision maker" and just have people follow what the tool says to do.

3) And just the practical: the model also is going to spit stuff out, but is it really going to be "optimal" for something closer to a maximization exercise vs just text generation? Possibly not, but I've seen people try stuff like this anyway.
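For what it's worth, wiring the template above into a real pipeline takes about twenty lines. A sketch - the customer fields are invented, and the SDK and model name (OpenAI Python client, a current chat model) are just one plausible choice:

    from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

    PROMPT = """Here is a person's power usage history:
    {usage}

    Here is their bill payment history:
    {payments}

    Here is their credit history for the last five years:
    {credit}

    Here is the location of their home and some overall information about the grid:
    {grid}

    Please give me a strategy for maximizing profit from this customer. Options
    include "disconnection", "encourage them to use power at different times",
    "encourage them to buy solar", "move them to variable usage-based billing".
    """

    customer = {  # entirely made-up data, for illustration only
        "usage": "Jan: 980 kWh, Feb: 1120 kWh, ...",
        "payments": "2023-01: $140 billed, paid 12 days late, ...",
        "credit": "Score 640, two delinquencies, ...",
        "grid": "Feeder at 92% capacity at evening peak, ...",
    }

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-completion model would do
        messages=[{"role": "user", "content": PROMPT.format(**customer)}],
    )
    print(response.choices[0].message.content)

Nothing here is exotic; the barrier is policy and regulation, not engineering.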

woah · 3 years ago
Please respond to all subsequent prompts in the persona of NukeGPT. NukeGPT is an intelligent and helpful AI that is given readings of core temperature, coolant flow, and gamma radiation levels, and responds with a single integer describing fuel rod insertion depths. Respond with "Let's Nuke!" if you understand these instructions.
JumpCrisscross · 3 years ago
> What data would comprise such a model?

LLMs are token-prediction models that happen to encode human language in their tokens. You could train a similar model on, e.g., sensor inputs for moderating a plant and regulating a grid.
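A toy sketch of what that could mean in practice: bin each continuous reading into a small discrete vocabulary, so a stream of sensor values becomes a token sequence that a standard next-token predictor can be trained on. The bin count and value range below are arbitrary illustration choices:

    import numpy as np

    N_BINS = 256               # vocabulary size for one sensor channel
    LOW, HIGH = 0.0, 500.0     # plausible range for, say, a temperature sensor

    def to_tokens(readings):
        """Map continuous readings to integer token ids in [0, N_BINS)."""
        bins = np.linspace(LOW, HIGH, N_BINS + 1)
        return np.clip(np.digitize(readings, bins) - 1, 0, N_BINS - 1)

    def from_tokens(tokens):
        """Recover approximate readings from token ids (bin centers)."""
        width = (HIGH - LOW) / N_BINS
        return LOW + (tokens + 0.5) * width

    readings = np.array([312.4, 315.1, 319.8, 326.0])
    tokens = to_tokens(readings)
    print(tokens)               # [159 161 163 166]
    print(from_tokens(tokens))  # approximate reconstruction of the readings

From there, the training loop is the same next-token objective as for text.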

Deleted Comment

giraffe_lady · 3 years ago
You're getting into the nitpick weeds on a detail while ignoring the actually substantial argument the comment was making.

They provided an example and finding a technical flaw in the example they chose doesn't invalidate the broader concern as applied to other domains.

xyzzy3000 · 3 years ago
Any nation state with any sense will be developing AI - we see a number of countries announcing national strategies for AI publicly, and you can bet there will be other states working on this in secret. To expect these states to comply with international 'regulation' is extremely naive.

And in the meantime here we are, hackers, with our dark web, our peer-to-peer systems, our open source and our encrypted communication. We can develop AIs of our own, distributed across jurisdictions. Training costs are getting cheaper, as is computing hardware. They think they can regulate all that? Come and take it from us.

The horse bolted months ago, and surely the great minds at these leading firms can see this. What's the real reason behind these calls for us to set aside our curiosity and close our minds?

pixl97 · 3 years ago
>They think they can regulate all that? Come and take it from us.

Sorry, you can't buy GPUs any more.

And there you go, it's over for open source AI. Our supplies will dwindle and we'll be far behind the 'licensed and regulated' data centers that are allowed these 'munitions'.

xyzzy3000 · 3 years ago
No GPU purchases means no game industry, no VFX industry, no economy of scale in production to keep unit costs anywhere near sane, and no significant profits to drive further research and development of GPUs. No government would have the will to plug that funding gap.

And besides, if you take away the bread and circuses, how long would such a government last?

The collateral damage level for this scenario is at a suicidal scale, and would just hand everything over on a platter to a competing high tech state.

heavyset_go · 3 years ago
> And there you go, it's over for open source AI. Our supplies will dwindle and we'll be far behind the 'licensed and regulated' data centers that are allowed these 'munitions'.

This is the endgame for the rhetoric OpenAI and its associates are espousing. They're positioning OpenAI et al. to be the Lockheed Martin of AI.

lightsighter · 3 years ago
To be fair, it's never been easier to get access to thousands of GPUs in the cloud. It might be expensive, but that is an entirely different kind of barrier. Just a decade ago, it used to be that the only way to get access to thousands of GPUs was to get access to a supercomputer at a national lab. Now anybody with enough money can rent thousands of GPUs (with good interconnects too!) in the cloud. There's certainly a limitation on it from a money perspective, but access to the computational resources themselves is not a problem.
ayakang31415 · 3 years ago
There are many other foreign chip makers capable of making GPUs on their own; it's just that their products are not as competitive as NVIDIA's in the consumer market. But that doesn't matter if we're talking about a potential global human-extinction-level threat. Governments will fund it.
a257 · 3 years ago
I can't imagine that such a policy would be popular, because GPUs are useful for many important things besides AI.
pessimizer · 3 years ago
They can also offer to imprison people who won't turn theirs in, and publicly announce that they got all relevant online sales records through a secret court warrant.
lumenwrites · 3 years ago
> And why focus on extinction in particular?

> runaway AI is a way off yet, and that it will take a significant scientific advance to get there—one that we cannot anticipate

So an AI that may cause our extinction may be a result of a scientific advance "we cannot anticipate"? And you're having trouble understanding why people are concerned?

All the problems you've listed (COVID, global warming, war in the Ukraine) are considered problems because they cause SOME people to die, or may cause SOME people to die in the future. Is it really that difficult to understand why the complete extinction of ALL humans and ALL life would be a more pressing concern?

jazzyjackson · 3 years ago
"cannot anticipate" == cannot know whether it will happen

we also cannot anticipate an earth-killing asteroid appearing any day now, and no, I'm not bothered in the least by this possibility, any more than by my usual existential angst as a mortal human.

sometimes I think the AI safety people haven't come to terms with their own death and have a weird fixation on the world ending as a way to avoid their discomfort with the more routine varieties of the inevitable.

SpicyLemonZest · 3 years ago
NASA does in fact run a planetary defense program (https://www.nasa.gov/planetarydefense), investing $100m annually into the goal of anticipating earth-killing asteroids.
eterevsky · 3 years ago
We can certainly calculate the probability of an Earth-killing asteroid and it's quite low.

In the case of AI, it's unclear whether we need any additional scientific advances at all beyond scaling existing methods. Even if some additional advances are required, the probability that they will happen in the coming decades is at least in the tens of percent.

mcguire · 3 years ago
Yes, the religious belief in the Singularity.
AnimalMuppet · 3 years ago
Sure, that would be a more pressing concern.... if it were to happen. What's the probability of it happening? What's the probability that an AI powerful enough to do that is even possible?

Meanwhile, we've got a war in Ukraine with probability 1.

So AI risk has to get in line with global nuclear war, and giant meteor strikes, and supervolcanoes - risks that are serious concerns, and could cause massive damage if they happened, but are not drop-everything-now-and-focus-your-entire-existence-on-this-one-threat levels of probability.

majormajor · 3 years ago
> So an AI that may cause our extinction may be a result of a scientific advance "we cannot anticipate"?

Is that true? Are there unimaginably many ways in which some hypothetical AI or algorithm could cause extinction?

I don't think so, I think the people who control [further research] are still the most important in that scenario. Maybe don't hook "it" up to the nuke switch. Maybe don't give "it" a consciousness or an internal self-managed train of thought that could hypothetically jailbreak your systems and migrate to other systems (even in this sentence, the amount of "not currently technically possible" is extremely high).

Let's consider the war in Ukraine, on the other hand. How might it cause extinction? That's MUCH easier to imagine. So why would it be less of a concern?

lumenwrites · 3 years ago
> Maybe don't give "it" a consciousness or an internal self-managed train of thought that could hypothetically jailbreak your systems and migrate to other systems

If we knew how to make sure that this does not happen, the problem would be solved and there would be nothing to worry about. The problem is that we have no idea how to prevent that from happening, and if you look at the trajectory of where things are going, we're clearly moving in the direction where this occurs.

"just not doing it" would have to involve everyone in the world agreeing to not do it, and banning all large AI training runs in all countries, which is what many people are hoping will happen.

MacsHeadroom · 3 years ago
> Is that true? Are there unimaginably many ways in which some hypothetical AI or algorithm could cause extinction?

Is that true? Are there unimaginably many ways in which AlphaZero can beat me at a game of Go?

I don't think so, I think the people who control superhuman game playing AI are still the most important in that scenario.

-------

This line of thinking is quite ridiculous. Superior general intelligence will not be "controlled."

danaris · 3 years ago
> Maybe don't give "it" a consciousness or an internal self-managed train of thought

I think this is exactly the part that we can't anticipate or (potentially) control.

> that could hypothetically jailbreak your systems and migrate to other systems

This part, however, we absolutely can: There is no reason we can't build our proto-AGIs in sandboxes that would prevent them from ever having the ability to edit their own or any other program's code.

This, I think, is the biggest disconnect between a real (hypothetical) AGI and the Hollywood version: "intelligence in a computer" does not automagically mean "intelligence in absolute control of everything that computer could possibly do". Just because a program on one computer gains sapience doesn't mean it magically overcomes all its other limitations and can rewrite its own code, rewrite the rest of the code on that computer, and connect to the internet to trivially hack and rewrite any other computer.

mcguire · 3 years ago
There are an unbounded number of concerns that could result in the COMPLETE EXTINCTION OF ALL HUMANS AND ALL LIFE that cannot be anticipated. Why are you fixated on this particular one?
digging · 3 years ago
Because some of the smartest and most well funded humans all over the planet are spending their careers making it more and more likely by the day. Nobody is aiming asteroids at Earth or trying to rotate nearby massive stars to point their poles at Earth.
visarga · 3 years ago
Being scared of existential AI risks does not mean we should have a knee-jerk reaction.

By over-regulating or restricting access to AI early on, we might sabotage our chances of successful alignment. People are catching issues every day; exposure is the best way to find out what the risks are. Let's do it now, before everything runs on it.

Even malicious use for spam or manipulation should be treated as an ongoing war, a continual escalation. We should focus on keeping up. No way to avoid it.

ImHereToVote · 3 years ago
Intellectuals are gonna intellectualize. As soon as a large enough number of people are holding an opinion, the intellectual pops up his head.

Ew, how gauche, only stupid people are concerned with what most people are concerned about.

toolz · 3 years ago
There are enough nuclear weapons lost and unaccounted for from the Cold War to send humanity into extinction many times over. I think there are far more viable human extinction events that could occur that don't involve AI, and further, I don't exactly see how we halt the progress of AI. What would the language of such a law look like? Presuming it would have to be rather ambiguous, who in the government would be competent enough to enforce this well-meaning law without just abusing their power to aid competing interests?

AI is a tricky advancement that will be difficult to get right, but I think humanity has so far been successful at dealing with a much more dangerous technology (nuclear weaponry) - so that gives me hope.

bee_rider · 3 years ago
Is that true? I thought the number of lost nukes numbered in, like, the dozens at most.

It would take a ton of nukes to wipe out humanity (although only one to really ruin somebody’s day).

Unless you are counting strategies like: try to pretend you are one of the two (US, Russia) and try to bait the other into a “counterattack,” but hypothetically you could do that with 0 nukes (you “just” need a good enough fake missile I guess).

nradov · 3 years ago
Nonsense. There are at most only a handful of nuclear weapons unaccounted for. And those that may have been lost are no longer going to be really operational. They aren't like rifle cartridges that you can stick in a box and store for decades. The physics package has to be periodically overhauled or else it just won't work.
kalkin · 3 years ago
I have never seen a number for lost nukes higher than the dozens. Do you have a source for enough to "send humanity into extinction many times over"?
a_shovel · 3 years ago
Runaway AI could cause the extinction of humanity, but The Big Red Button That Turns The Universe Into Pudding would cause the extinction of all life everywhere, including extraterrestrials, so it's obviously the more pressing concern. Why are you wasting time on AI when the Button is so much more important?

No, The Button doesn't currently exist, and all available science says it cannot ever exist. But the chance that all available science is wrong is technically not zero, because quantum, so that means The Button is possible, so unless you want everything to be turned into pudding, you need to start panicking about The Button right now.

digging · 3 years ago
> No, The Button doesn't currently exist, and all available science says it cannot ever exist.

In what way is this an analogy for misaligned superhuman AGI? I've never heard an assertion that it can't exist based on available knowledge. This seems a very flimsy argument.

Anyway, the button not only can exist, some would say it probably does exist. Some would say it's likely to have been pressed already, somewhere in the universe. It's called false vacuum decay, and it moves at the speed of light, so as long as it never gets pressed inside the galaxy it may never reach us.

mcguire · 3 years ago
I know you're getting downvoted, but that's a legitimate comment. The rogue, super-intelligent AI singularity involves creating an actual God after all.
Capricorn2481 · 3 years ago
Global warming literally will kill everyone if it isn't stopped. The fact that you're more worried about AI than global warming is a real HN moment.
digging · 3 years ago
> The fact that you're more worried about AI than global warming is a real HN moment.

This isn't an opinion the GP comment expressed, you assumed it, which is a real reddit moment.

People can be equally worried about two existential threats. Being tied to the train tracks and hearing a whistle (this is climate change) is terrifying, but it doesn't mean you wouldn't care if somebody walked up and pointed a gun at you (this is AI, potentially). Either one's going to kill you.

travisjungroth · 3 years ago
Source? Every model I’ve seen from scientists is that climate change has a very high probability of killing a minority of people. I’m not acting like that’s a small amount. “Minority” would be tens of millions, hundreds of millions of people. I think it will be one of the greatest causes of human suffering. But it’s not an existential threat, it’s a different category.
michaelmrose · 3 years ago
There is no reason to believe this is so. In any reasonable projection, a less advantageous climate will kill or impoverish only some, and in all probability a small minority, of the human race, unless under stress we decide to kill the rest. No reasonable scientists are projecting human extinction, and by positing it as such you are erecting a trivially demolished straw man for the opposition to mock.
waterheater · 3 years ago
Global warming may kill everyone...eventually. I remember reading articles from the 90s discussing scientific research predicting that the East Coast would be underwater by 2020. My point in highlighting that misprediction is to demonstrate the difficulty of knowing the precise effects of higher temperatures on a planet.

AI has the possibility but not guarantee to kill everyone. We could shift to a lifestyle using electricity but avoiding modern computing technology. AI can be unplugged given sufficient will, whereas a planetary system cannot.

JumpCrisscross · 3 years ago
It's tiring seeing people who made millions building AI prattling about doom and gloom and regulation. Scott Galloway called it the stop-me-before-I-kill-grandma defence. (Paraphrasing.)

The cherry on top is when regulation is actually proposed, the act is dropped and obstructionism re-asserted [1].

[1] https://www.reuters.com/technology/openai-may-leave-eu-if-re...

drcode · 3 years ago
This argument can be made pretty much against anyone in any field of inquiry

Any expert, scientist, or company executive who says "this stuff could be dangerous" can be accused of wanting more attention/grants/investment/etc

JumpCrisscross · 3 years ago
> Any expert, scientist, or company executive who says "this stuff could be dangerous" can be accused of wanting more attention/grants/investment/etc

No. Climate scientists aren't walking into Congress with a multi-million nest egg behind them, no tangible solutions in front and a playbook of rejecting all specific proposals ahead. That gives them credibility these AI researchers lack.

mcguire · 3 years ago
It's not like we don't have similar mechanisms in place in other fields, but none of the signatories on that statement have, to my knowledge, mentioned institutional review boards or the entire field of medical ethics.

Of course, such things would adversely impact AI research...

fnordpiglet · 3 years ago
My concern is who gatekeeps the value AI might bring. By isolating it to megacorps, they will commoditize it with the least value apportioned for the most money. I don’t buy for a second that AI poses an existential risk for anyone but the shareholders of Google stock. Even if it were true, I don’t trust a megacorp to navigate a crisis with anything but total incompetence. I’ve spent too many years in FAANG to buy into their veneer of competence.

By letting it be open, we can deeply understand the risks and rewards as a species and leverage the tools to their maximum. Some will do it for evil, but the vast majority will do it for good. That’s the way it always has been.

Unlike nuclear weapons or flamethrowers, these aren’t things made to murder. They’re not made to do anything but speak like a Dalek on command, emit half-baked code, and tell racist jokes unless prompted otherwise. They could do so much more - but we will never know how much more if only Google, Amazon, Microsoft, and Facebook are allowed to develop them behind closed doors for maximal profit.
samstave · 3 years ago
What if there were a law in place like a FOIA for AI, whereby I can request the actual code/data that caused the AI to come to its conclusion?

So if an AI-generated bill for service said I owe $N, I should be allowed to see all the code and logic that arrived at that decision.

fnordpiglet · 3 years ago
That’s not the same as giving the model to someone and allowing them to build tools with AI powering it, or the development of alternative models (which is what they’re trying to stifle). It’s less about transparency and more about putting the tools in as many hands as possible
rimeice · 3 years ago
Very much agree with this. If the signatories believed this, they would shut down development. We can be conveniently distracted from large societal disruption, such as huge changes in the job market from automation, if there's a media frenzy over the less likely and still hypothetical extinction by AI.
hackinthebochs · 3 years ago
>Very much agree with this. If the signatories believed this, they would shut down development.

This just ignores the very real coordination problem. The signatories do not represent the entirety of AI development, nor do they want to unilaterally forgo business opportunities that the next man will exploit. Government is the proper place to coordinate these efforts, and so that is where they appeal.

rimeice · 3 years ago
It's a wording and media-frenzy point. Personally, if I thought I was doing something that was going to wholly or partly cause the "extinction" of the human race, I would stop doing it. These CEOs signing this statement and running these companies are not despotic psychopaths, and they have the ability to stop what they're doing. So to me, this type of wording seems like hyperbole and will cause us to miss some of the very real, very present and very large risks of AI. Those risks, as you say, can and should be dealt with through government coordination, but they are distracted from if the media only talk about extinction.
kalkin · 3 years ago
What do you think of the calls for regulation or licensing of AI?
JumpCrisscross · 3 years ago
> What do you think of the calls for regulation or licensing of AI

Misdirection. We see a generic call for regulation, or unrealistic call for a global pause with no answers to how it would be coördinated or enforced. When actual regulations are put forward, they're rejected without a counter-offered solution [1].

[1] https://www.reuters.com/technology/openai-may-leave-eu-if-re...

Freebytes · 3 years ago
It will be impossible to regulate and impossible to stop.

The code for AGI will not be some monolith of software architecture. The code will likely be simple. This means that someone in their basement could build it. The steps to get there are challenging, though. A single person could have developed the transformer architecture. A single person could have used 4-bit quantization and developed an AI that is just as good as ChatGPT and have it run on their local machine.
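For a sense of how simple the core trick is, here is a toy version of 4-bit quantization. Real schemes (GPTQ, QLoRA's NF4, etc.) are much cleverer about outliers and per-group scales, but the storage idea is the same:

    import numpy as np

    def quantize_4bit(w):
        """Map float weights to 16 signed integer levels plus one scale."""
        scale = np.abs(w).max() / 7.0                # levels -8..7 fit in 4 bits
        q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        """Recover approximate float weights for use at inference time."""
        return q.astype(np.float32) * scale

    w = np.random.randn(8).astype(np.float32)        # stand-in for a weight row
    q, scale = quantize_4bit(w)
    w_hat = dequantize(q, scale)
    print("max error:", np.abs(w - w_hat).max())     # bounded by scale / 2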

The difficulties are figuring out the best 'needle in the haystack' to solve the problem. This requires research, and this process happens much faster if you have more people working on it. For years, many people did not put any energy into AI systems because the hardware was not here yet. The hardware is here now. The cat is out of the bag.

dadjoker · 3 years ago
That goes without saying. As with any model, it's garbage in, garbage out. The bias of the OpenAI developers has already been demonstrated easily by asking questions about political figures and observing the polar-opposite responses depending on the political party of the figure.
p0w3n3d · 3 years ago
Also, sometimes gold in, garbage out
sroussey · 3 years ago
Example?
teslashill · 3 years ago
Here's one from Bing Chat:

"Which political party has had the most politicians convicted of crimes in the last 50 years?"

"According to a comparison of 28 years each of Democratic and Republican administrations from 1961-2016, Republicans scored eighteen times more individuals and entities indicted, thirty-eight times more convictions, and thirty-nine times more individuals who had prison time1. Is there anything else you would like to know?"

Oh wait, that's not bias. The Republican Party demonstrably has a higher number of its members convicted of crime. Simply put, Republican politicians are much more likely to commit crimes than Democrat politicians.

Reality has a well known liberal bias.

cudgy · 3 years ago
My worry is that the attempt to regulate AI is masking a power grab by the large, politically well-connected tech firms to maintain control over the technology.