I keep hearing this but have yet to find a good resource to study the issues. Most of what I've read so far falls into two buckets:
"It'll hijack our minds via Social Media" - in which case Social Media is the original sin and the problem we should be dealing with, not AI.
or
"It'll make us obsolete" - I use the cutting edge AI, and it will not, not anytime soon. Even if it does, I don't want to be a lamplighter rioting, I want to have long moved on.
So what other good theories of safety can I read? Genuine question.
That's why there's no single source that's useful for studying AI-related issues. Until we see an incident, we will never know for sure what is just a possibility and what is (or is not) an urgent or important issue [1].
So, the best we can do is reason by analogy from comparable events. For example: the centuries of the Industrial Revolution and the many disruptive events that followed; the history of wars and upheavals, many of which were at least partially caused by labor-related problems [2]; and the labor disruptions of the 20th century, including the proliferation of unions, offshoring, immigration, anticolonialism, etc.
> "Social Media is the original sin"
In the same way that radio, television and the Internet are the "original sin" in large-scale propaganda-induced violence.
> "I want to have long moved on."
Only if you have somewhere to go. Others may not be that mobile or lucky. If autonomous trucks make the trucking profession obsolete, it's questionable how quickly truckers can "move on".
[1] For example, remotely operated systems have existed for quite some time, yet we've only seen a few assassination attempts. Does that mean slaughterbots are not a real issue? It's unclear, and too early to say.
[2] For example, high unemployment and low economic mobility in post-WW1 Germany; serfdom in Imperial Russia.
We are going to be hearing that argument a lot as the AI police state evolves.
A great quote from an otherwise OK movie ("Anon").