Robert Miles' videos are among the best-presented casual introductions I have seen to specific points in this list, primarily on the alignment side rather than the capabilities side.
E.g., this one on instrumental convergence: https://youtube.com/watch?v=ZeecOKBus3Q
E.g., this introduction to the topic: https://youtube.com/watch?v=pYXy-A4siMw
He is also behind the community-led AI Safety FAQ, https://aisafety.info, which gives brief answers to common questions.
If you have specific questions, I might be able to point you to a more specific argument in greater depth.
Some of these goals are ones we would really rather a misaligned superintelligent agent not have. For example (a toy sketch follows the list):
- self-improvement;
- acquisition of resources;
- acquisition of power;
- avoiding being switched off;
- avoiding having one's terminal goals changed.
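To make "avoiding being switched off" concrete, here is a minimal sketch of the argument. This is my own toy model, not anything from Miles' videos; the goal, probabilities, and payoffs are all made-up illustrative numbers.

    # Toy model of instrumental convergence (hypothetical numbers throughout):
    # an agent maximizing *any* per-step goal prefers to disable its off
    # switch, because being switched off forfeits all future goal value.

    def expected_goal_value(action: str, horizon: int = 10) -> float:
        """Expected units of the terminal goal achieved over `horizon` steps."""
        goal_per_step = 1.0          # value gained each step the agent keeps running
        p_shutdown_if_allowed = 0.5  # assumed chance the operator switches it off
        if action == "disable_off_switch":
            # The agent runs for the full horizon with certainty.
            return goal_per_step * horizon
        if action == "allow_off_switch":
            # Crudely modeled: with probability p, the whole horizon is lost.
            return (1 - p_shutdown_if_allowed) * goal_per_step * horizon
        raise ValueError(f"unknown action: {action}")

    best = max(["allow_off_switch", "disable_off_switch"], key=expected_goal_value)
    print(best)  # -> disable_off_switch

Nothing in the comparison depends on what the goal actually is: any goal_per_step > 0 makes disabling the switch dominate, which is exactly the sense in which these goals are "instrumentally convergent".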
They can’t point to an existing system that poses existential risk, because it doesn’t exist. They can’t point to a clear architecture for such a system, because we don’t know how to build it.
So again, what can be refuted?
Without that, it pretty much looks like a list of invites to a VIP club...
Risk of extinction due to AI? People have been reading too much science fiction. I would love to hear a plausible story of how AI will lead to human extinction that wouldn't happen with traditional non-AI tech. For the sake of conversation, let's say non-AI tech is any broadly usable consumer technology from before Jan 1, 2020.
I’ve never seen such destructive curiosity. The desire to make cool new toys (and yes, money) is enough for them to risk everything.
If you work on AI: maybe just … stop?
Mitigating the risk of extinction from climate change should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The fantasy of extinction risk from "AI" should not be placed alongside real, "societal-scale" risks such as the ones above.
Well. The ones above.
Just because it has not come to pass yet does not mean they were wrong. We have come close to nuclear annihilation several times. We may yet, with or without AI.