The intersection of the two seems to be quite hard to find.
At the state we're in, the AIs we're building are just really useful input/output devices that respond to a stimulus (e.g., a "prompt"). No stimulus, no output.
This isn't a nuclear weapon. We're not going to accidentally create Skynet. The only thing it's going to go nuclear on is the market for jobs that are going to get automated in an economy that may not be ready for it.
If anything, the "danger" here is that AGI is going to be a printing press. A cotton gin. A horseless carriage -- all at the same time and then some, into a world that may not be ready for it economically.
Progress of technology should not be arbitrarily held back to protect automatable jobs, though. We need to adapt.
- Superintelligence poses an existential threat to humanity
- Predicting the future is famously difficult
- Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence
- Even a 1-in-1000 existential threat would be extremely serious. If an asteroid had a 1-in-1000 chance of hitting Earth and obliterating humanity, we should make serious contingency plans (see the rough numbers below).
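To put that last point in numbers, here's a back-of-envelope expected-value sketch. The ~8 billion population figure is an assumption for illustration, not a precise input to the argument:

```python
# Back-of-envelope expected-value calculation for the 1-in-1000 argument.
p_catastrophe = 1 / 1000          # hypothesised probability of the catastrophe
population = 8_000_000_000        # assumed world population, roughly 8 billion

expected_deaths = p_catastrophe * population
print(f"Expected deaths: {expected_deaths:,.0f}")  # -> 8,000,000
```

Even at odds most people would call negligible, the expected loss is on the order of millions of lives, which is why the asteroid analogy treats it as worth serious contingency planning.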
Second question: how confident are you that you're correct? Are you 99.9% sure? Confident enough to gamble billions of lives on your beliefs? There are almost no statements about the future which I'd assign this level of confidence to.
But increasingly, there is no correlation between age and specific developmental stages. It is merely a number that indicates how long something has been around but not what its current status may be.
Of course, the big question is what to do if/when they're smart enough to fool everybody.
Honestly, a really simple game-design fix for this would be to unlock tasks more slowly as the player demonstrates more engagement with the system. That way, if you are like me and mostly find them boring and repetitive, you don't feel as bad about not getting them done.
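A minimal sketch of what that pacing could look like, assuming a simple completion-rate metric; the function name, batch size, and thresholds are hypothetical, not from any particular game:

```python
# Unlock new tasks in proportion to how much the player engaged with the ones
# already offered, so disengaged players aren't buried in ignored tasks.

def tasks_to_unlock(offered: int, completed: int, base_unlock: int = 3) -> int:
    """Scale the next batch of tasks by the player's completion rate so far."""
    if offered == 0:
        return base_unlock                 # first-time players get the base batch
    completion_rate = completed / offered  # 0.0 (ignores tasks) .. 1.0 (does them all)
    return max(1, round(base_unlock * completion_rate))

# A player who finished 1 of 6 offered tasks only sees 1 new one,
# while a player who finished all 6 gets the full batch of 3.
print(tasks_to_unlock(offered=6, completed=1))  # -> 1
print(tasks_to_unlock(offered=6, completed=6))  # -> 3
```

Keeping a minimum of one unlock per cycle leaves the door open for players who later decide to engage, without flooding everyone else.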