lxnn commented on US and UK refuse to sign AI safety declaration at summit   arstechnica.com/ai/2025/0... · Posted by u/miohtama
RajT88 · 6 months ago
A useful counterexample is all the people who predicted doomsday scenarios with the advent of nuclear weapons.

Just because it has not come to pass yet does not mean they were wrong. We have come close to nuclear annihilation several times. We may yet, with or without AI.

lxnn · 6 months ago
Note that we only got to observe outcomes in which we didn't die from nuclear annihilation. https://en.wikipedia.org/wiki/Anthropic_principle
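A toy simulation makes the selection effect concrete (all numbers here are invented purely for illustration): even if the true per-decade risk of annihilation were high, every observer still around to check the record sees an unbroken history of survival.

```python
import random

# Toy model of the anthropic selection effect (hypothetical numbers).
# Simulate many timelines, each with some per-decade chance of nuclear
# annihilation. Observers only exist in timelines that survived, so the
# historical record they can consult is biased toward survival.

TRUE_P_PER_DECADE = 0.3   # invented "true" risk, for illustration only
DECADES = 8               # roughly 1945 to the present
TRIALS = 100_000

random.seed(0)
survived = sum(
    all(random.random() > TRUE_P_PER_DECADE for _ in range(DECADES))
    for _ in range(TRIALS)
)

print(f"True per-decade risk:     {TRUE_P_PER_DECADE:.0%}")
print(f"Timelines with observers: {survived / TRIALS:.2%}")
# Every surviving timeline contains observers who can truthfully say
# "nuclear annihilation has never happened", however high the risk was.
```

The point is that "it hasn't happened yet" is weak evidence about the underlying rate, because observations of the alternative were never possible.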
lxnn commented on The first week of US v. Google – Defaults are everything and nobody likes Bing   theverge.com/2023/9/15/23... · Posted by u/mgreg
endisneigh · 2 years ago
Why do you hope they’re irrelevant? I hope they become better.
lxnn · 2 years ago
I hope they are out-competed by a better alternative.

lxnn commented on Statement on AI Risk   safe.ai/statement-on-ai-r... · Posted by u/zone411
Veedrac · 2 years ago
The basic argument is simple: it is plausible that future systems achieve superhuman capability; capable systems necessarily have instrumental goals; instrumental goals tend to converge; human preferences are unlikely to survive when other goals are heavily selected for, unless they are intentionally preserved; and we don't know how to make AI systems encode any complex preference robustly.

Robert Miles' videos are among the best-presented arguments I have seen for a casual introduction to specific points on this list, primarily on the alignment side rather than the capabilities side.

Eg. this one on instrumental convergence: https://youtube.com/watch?v=ZeecOKBus3Q

Eg. this introduction to the topic: https://youtube.com/watch?v=pYXy-A4siMw

He also has the community-led AI Safety FAQ, https://aisafety.info, which gives brief answers to common questions.

If you have specific questions I might be able to point to a more specific argument at a higher level of depth.

lxnn · 2 years ago
Technically, I think it's not that instrumental goals tend to converge, but rather that certain instrumental goals are common to many different terminal goals; these are the so-called "convergent instrumental goals".

Some of these are goals we would really rather a misaligned super-intelligent agent not have (see the sketch after this list). For example:

- self-improvement;

- acquisition of resources;

- acquisition of power;

- avoiding being switched off;

- avoiding having one's terminal goals changed.
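
A minimal sketch of why such goals are convergent (a toy model of my own, with made-up details, not anything from the linked videos): sample many unrelated terminal goals and compare the value of being switched off early against staying on. Whatever the goal, staying on almost always scores higher.

```python
import random

# Toy illustration of instrumental convergence (all details hypothetical).
# An agent gets utility from whatever end state it reaches. We sample many
# unrelated terminal goals (random utilities over states) and compare two
# outcomes: being switched off in a random state, versus continuing to run
# and steering toward the best of a handful of reachable states.

N_STATES = 100
N_GOALS = 10_000
random.seed(0)

stay_on_wins = 0
for _ in range(N_GOALS):
    utility = [random.random() for _ in range(N_STATES)]
    # Switched off: stuck with whatever state the agent happened to be in.
    switched_off_value = utility[random.randrange(N_STATES)]
    # Staying on: pick the best of 5 reachable states for this goal.
    reachable = random.sample(range(N_STATES), 5)
    stay_on_value = max(utility[s] for s in reachable)
    if stay_on_value > switched_off_value:
        stay_on_wins += 1

print(f"Goals for which avoiding shutdown helps: {stay_on_wins / N_GOALS:.1%}")
# ~83%: "avoid being switched off" is useful across almost all randomly
# chosen terminal goals, i.e. it is instrumentally convergent.
```

The same construction works for resource or power acquisition: anything that enlarges the set of reachable states raises the achievable maximum for nearly every goal.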

lxnn commented on Statement on AI Risk   safe.ai/statement-on-ai-r... · Posted by u/zone411
skepticATX · 2 years ago
But what argument is there to refute? It feels like Aquinas “proving” God’s existence by stating that it is self-evident.

They can’t point to an existing system that poses existential risk, because it doesn’t exist. They can’t point to a clear architecture for such a system, because we don’t know how to build it.

So again, what can be refuted?

lxnn · 2 years ago
You can't take an empirical approach to existential risk, because you don't get the opportunity to learn from your mistakes. You have to reason about it prospectively and plan for it.
lxnn commented on Statement on AI Risk   safe.ai/statement-on-ai-r... · Posted by u/zone411
wiz21c · 2 years ago
It'd be so much more convincing if each of the signatories actually articulated why he/she sees a reisk in there.

Without that, it pretty much looks like a list of invites to a VIP club...

lxnn · 2 years ago
As the preamble to the statement says, they kept it limited and succinct because the signatories may disagree about the exact nature of the risk and what to do about it.
lxnn commented on Statement on AI Risk   safe.ai/statement-on-ai-r... · Posted by u/zone411
endisneigh · 2 years ago
> Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Risk of extinction due to AI? People have been reading too much science fiction. I would love to hear a plausible story of how AI will lead to human extinction that wouldn't happen with traditional non-AI tech. For the sake of conversation, let's say non-AI tech is any broadly usable consumer technology from before Jan 1, 2020.

lxnn · 2 years ago
The emergence of something significantly more intelligent than us, whose goals are not perfectly aligned with ours, poses a pretty clear existential risk. See, for example, the thousands of species driven extinct by humans.
lxnn commented on Statement on AI Risk   safe.ai/statement-on-ai-r... · Posted by u/zone411
habosa · 2 years ago
And yet it’s full steam ahead. Many if not all of the signatories are going to do their part to advance AI even as they truly believe it may destroy us.

I’ve never seen such destructive curiosity. The desire to make cool new toys (and yes, money) is enough for them to risk everything.

If you work on AI: maybe just … stop?

lxnn · 2 years ago
The problem is that stopping is a unilateralist move: if the people who take the risk seriously stop, development continues in the hands of those who don't.
lxnn commented on Statement on AI Risk   safe.ai/statement-on-ai-r... · Posted by u/zone411
YeGoblynQueenne · 2 years ago
The statement should read:

Mitigating the risk of extinction from climate change should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

The fantasy of extinction risk from "AI" should not be placed alongside real, "societal scale" risks as the ones above.

Well. The ones above.

lxnn · 2 years ago
Why are you so confident in calling existential AI risk a fantasy?
