Covid-era politicization and its fallout have a lot to do with it as well.
The US only left in 2020 and then rejoined in 2021; I suppose that's why I didn't remember it as a big thing.
The US was also only paying ~15%. It was the biggest governmental funder, with Germany second at ~9%. But the WHO is apparently mostly funded by charitable donations; the Bill & Melinda Gates Foundation was contributing ~5%, for instance.
(it's awkward to cite sources from my phone, but this should be easy to verify)
I do get the sentiment from the US perspective, though; I don't mean to argue against your points.
One of the differences in risk here is that I think you get some legal protection if your human assistant misuses your account, or if it gets stolen. With the OpenClaw bot, though, I'm unsure whether any insurance company or bank will side with you if the bot drains your account.
These disincentives rest on the fact that humans have physical necessities they must cover to survive, and they enjoy having those well fulfilled without worrying about them. Humans also very much like to be free, dislike pain, and want a good reputation with the people around them.
It is exceedingly hard to pose similar threats to a being that doesn’t care about any of that.
Although, to be fair, we also have other soft but effective means of making it unlikely that an AI will behave badly in practice. These methods are fragile, but they are improving quickly.
In either case it is really hard to eliminate the possibility of harm, but you can make it unlikely and predictable enough to establish trust.