If we manage to make a 'better' replacement for ourselves, is it actually a bad thing? Our cousins on the hominin family tree are all extinct, yet we don't consider that a mistake. AI made by us could well make us extinct. Is that a bad thing?
I guess I wouldn't have been so angry about any of this before I had children, but now I'm very much in favor of prolonged human existence.
Growing tomatoes is less efficient than buying them, regardless of your metric. If you just want really cleanly grown tomatoes, you can buy those. If you want cheap tomatoes, you can buy those. If you want big tomatoes, you can buy those.
And yet individual people still grow tomatoes. Zillions of them. Why? Because we are inherently over-evolved apes who like sweet juicy fruits. The key to being a successful human in the post-scarcity AI overlord age is to embrace your inner ape and just do what makes you happy, no matter how simple it is.
The real insight in all this is that the above advice holds even if there are no AI overlords.
The philosophical problem I see with the "AI overlord age" (though not directly related to AI) is that we'll then have the technology to change the inherent human desires you speak of. At that point, growing tomatoes seems like a very inefficient way of satisfying a reward function that we could just change to something simpler.
Maybe we wouldn't do it, precisely because it would dissolve the very notion of purpose? But it does feel to me like destroying (beating?) the game we're playing when there is no other game to play.
(Anyway, this is obviously a much better problem to face than weaponized use of a superintelligence!)