Can anyone make a compelling argument that any of these AI companies have the public's best interest in mind (alignment/superalignment)?
I don't want artificial buddies or servants or anything like that, except maybe in video games.
There was an advertisement on Twitter a few years ago for Google Home. It was a video of a parent putting their child to bed and saying, "Ok Google, read Goodnight Moon."
It felt like a window into a viscerally dystopian future where we outsource human interaction to an AI.
This seems like an obvious problem to me.
Edit: I am in no way saying conservatism is bad and liberalism is good. I hold values from both.
- Information bubbles (this is the top issue; they are incredibly persuasive)
- Geographic location and social environment
- Lack of time to deeply evaluate truth vs noise and consider multiple sides of an issue
- Conviction of values: how strongly a person believes their values are tied to a political view (this leads to subtly drawing emotional conclusions and implicitly trusting a political party)
- Belief that one's own intelligence makes one immune to propaganda (a clearly false belief that many smart people fall into)
Deep emotional awareness is not as strongly related to intelligence as people think.