Readit News
1vuio0pswjnm7 · 2 years ago
Always all-or-nothing thinking from these folks. Like what they are working on can never be just another boring thing that nerds find entertaining. No, it has to be "world-changing". Gonna "change the world" (for the better or the worse?) while sitting behind a keyboard. Except they do not know how to write: they overlook the important details and exaggerate, communicating in hyperbolic, know-it-all nerd gibberish.

(Grants do not require repayment.)

ninetyninenine · 2 years ago
What about the folks that say everything will work out in a fair and balanced way as if the universe and everything in reality stays perfectly balanced on a tip of a pin?

You paint the picture as if the all-or-nothing folks are extreme, when in reality the extreme is more likely than some perfectly fair and balanced equilibrium.

In nature, things tend to overload, fizzle out, or stay in equilibrium. Equilibrium, though possible, is the rarer outcome. Mind you, it's not an impossible outcome, but given the way entropy works, it is the rarer one.

Humanity itself is an example of this rare outcome. Usually molecules don't self-assemble into replicating machines; they either freeze into inanimate rock or overload into fusion-producing stars.

As for AI: I think it will either change the world or amount to nothing. The former seems more likely. Some strange middle ground where the AI technology never improves to the point of a societal paradigm shift seems unlikely. ChatGPT and Sora only make me ask: what is the trend line predicting next?

living_room_pc · 2 years ago
It's the AI hype train. There is so much money flowing around this topic right now that everyone wants their share of the pie. I would guess that anyone writing articles on the topic has some stake in AI as well, either as an investor or an employee.

What I find interesting is that the negative news regarding AI safety is adding to the hype as well, since it seems to capture a lot of attention.

arisAlexis · 2 years ago
If you are calling it hype, I'm sure you are not using the models every day in your work or otherwise. Then you would understand.
timeagain · 2 years ago
Everyone is courting VC dollars. Think of these claims as grant applications and everything makes more sense.
1vuio0pswjnm7 · 2 years ago
They cannot live in the present. Need to constantly keep (incorrectly) predicting the future.
1vuio0pswjnm7 · 2 years ago
HN commenters frequently resort to describing the choices of software available to internet users in terms of "winning" or "won". This is more "all-or-nothing" thinking. People sometimes write software for the enjoyment of it, or to satisfy personal needs. That is, for non-commercial purposes. Sometimes this software becomes popular, sometimes it does not. In either case, the software persists; it remains available. It does not have to "change the world" in order to be useful. If it is non-commercial, it does not "win" or "lose", except in the minds of HN commenters who can only think in all-or-nothing terms. In truth, it simply exists as an option for all internet users.

It is possible that "AI" might not be as world-changing as its proponents are claiming. However, if it is free and open-source, and non-commercial, it may still persist and remain useful, regardless of whether it becomes popular or not.

ecoquant · 2 years ago
IMO we underestimate the psychological implications of wealth in modern society.

I suspect that even if a Powerball lottery winner takes a philosophical position, it will be taken far more seriously than it would have been had they never won the lottery.

At some point you start to believe your own bullshit, when all of society's signals are telling you what a genius you are.

phatfish · 2 years ago
Being rich brings power, not intelligence, unfortunately.
sberens · 2 years ago
I'd like to remind people of this single-sentence statement:

> Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. [0]

Signed by Demis Hassabis, Sam Altman, and Bill Gates, among others.

[0] https://www.safe.ai/work/statement-on-ai-risk

antod · 2 years ago
Part of me wonders if these people are intentionally framing the debate around ethics and potential risks as longer term extinction level problems to distract from the nearer term damage caused by them accelerating the economic inequality of the AI have-nots while they make themselves even richer.
cyrialize · 2 years ago
I believe you may be alluding to longtermism[0]. At face value, longtermism seems like a good thing, but I've heard many criticisms against it, mainly levied against the billionaire class.

And the criticisms mostly center on what you're saying here: how many billionaires focus on fixing problems that are very far off in the future while ignoring how their actions affect those in the very near future.

This is really less of a criticism of longtermism, and more of a criticism of how billionaires utilize longtermism.

Is it important that we find another planet to live on? Sure, but many will argue that we should be taking steps now to save our current planet.

[0]: https://en.wikipedia.org/wiki/Longtermism

hyperadvanced · 2 years ago
The more I look at AI, the more I get the feeling that this is true. Spinning an intriguing sci-fi tale of apocalypse and extinction is relatively easy and serves to obfuscate any nearer-term concerns about AI behind a hypothetical that sucks the air out of the room.

That said, I don’t think that it’s necessarily disingenuous so much as it is myopic - to them of course AI is exciting, world-changing, and profitable, but they (willfully or not) fail to see the downsides or upsides for anyone else but them. Perhaps in the minds of the ultra-rich AI proponents, solutions to nearer-term effects of their tech are someone else’s problem, but the “existential risks” are “everyone’s” problem.

statuslover9000 · 2 years ago
The short-term effect is a harbinger of the long-term risk, since capitalism doesn’t inherently care for people who don’t provide economic value. Once superintelligent AI arises, none of us will have value within this system. Even the largest current capital holders will have a hard time holding on to it with an enormous intelligence disadvantage. The logical endpoint is the subjugation or elimination of our species, unless we find a new economic system with human value at its core.
theptip · 2 years ago
No, they are not. Pretty much everyone in the x-risk community also recognizes the existence of short-term mundane harms as well. The community has been making these predictions for over a decade, long before it was anything other than crazy talk to most people.

Google has a big investment in reducing AI bias (remember Gemini got slammed for being “too woke”). Altman is a big proponent of UBI. Etc.

preommr · 2 years ago
I am 90% sure that Altman's views on such issues are a Machiavellian attempt at regulatory capture.
Clubber · 2 years ago
This; Gates too. It's becoming an obvious attempt to garner support for the government restricting the use of AI to large players. None of the entrenched interests want any disruption that AI might cause whatsoever.

Replace "AI" in all the doomsaying with "the internet," and it will become clearer.

randomcarbloke · 2 years ago
the fear-mongers are almost always either ideologues (academic or otherwise), or angling at regulatory capture.
i5heu · 2 years ago
I’d like to remind people that these people have no more knowledge about AGI than anyone else on this planet, since there is no knowledge yet, and everything they say about this topic is as relevant as what any other random person can say.
classified · 2 years ago
They are just banking on their putative status as influencers to grab more influence.
reducesuffering · 2 years ago
Yes, let's go with the random layperson knowledge of an HN commenter compared to the people smart enough to actually build all the AGI tech. 50/50 coin toss, I'm sure.

Dario Amodei (Anthropic CEO, builder of Claude 3 Opus): "My chance that something goes, you know, really catastrophically wrong on the scale of human civilization, might be between 10 - 25%"

tivert · 2 years ago
So, is this extinction or not?

> We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like. - Sam Altman (https://blog.samaltman.com/the-merge)

I would say it is.

Balgair · 2 years ago
I love how, per the quote, he thinks we're anywhere close to being able to merge with AI.

I'm a neuroscientist and, man alive, we're nowhere close to being able to merge with machines. Like, do you have any idea how many diseases we could eradicate if we could modify neurons like that? For real, 'curing death' would be step 9 or 10 on that 1000-step journey.

I hope you see how terribly uninformed such a take is then.

bamboozled · 2 years ago
Sam Altman really seems unhinged, or he just says this stuff to be edgy...or both.
reducesuffering · 2 years ago
Even worse, some factions literally advocate for killing all humans in the pursuit of a synthetic intelligence, and YC's Garry Tan is advocating for these people!

Beff Jezos (e/acc founder): "e/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism

    Parts of e/acc (e.g. Beff) consider ourselves post-humanists; in order to spread to the stars, the light of consciousness/intelligence will have to be transduced to non-biological substrates"
https://beff.substack.com/p/notes-on-eacc-principles-and-ten...

serf · 2 years ago
Given who the people are that signed it, it really just comes off more like an attempt at creating a regulatory moat around the territory they got to first.

"This is extinction level important, so all you people who aren't us need to be careful meddling with the stuff we're meddling with for profit."

mistermann · 2 years ago
Mitigating the risk of unnecessary global death due to the curious, suboptimal manner in which humans have "decided" ("democratically", dontcha know) to distribute wealth, on the other hand? Nothing to see here!
drooby · 2 years ago
AGI escaping a sandbox is truly terrifying. There will be a subgroup of the population that will worship it and work for it. It's not so much AGI that scares me - it's the humans I'm scared of.
Animats · 2 years ago
> AGI escaping a sandbox is truly terrifying.

That's already happened. The AGIs are called corporations. Most governments have failed to regulate corporations successfully, and have been unable to keep them from becoming almost powerful enough to challenge governments.

The accelerationists for corporations were a group of economists, led by Milton Friedman, and a group of business leaders, organized by the U.S. Chamber of Commerce. In the 1970s and 1980s, they pushed the ideas that corporations are responsible only to their stockholders, and that government should not interfere with the concentration of corporate power. Those were not mainstream ideas of the 1930s to 1960s. The corporate accelerationists succeeded. That's when corporations escaped the sandbox.

Now that's an "alignment problem".

mitthrowaway2 · 2 years ago
Corporations are indeed a difficult alignment problem, as anyone who has tried to design an incentive scheme eventually realizes. But corporations are still composed of humans, beholden (if only loosely) to human values, limited (if loosely) by human intelligence, and are not very well-coordinated decisionmakers with coherent goals.
hacker_88 · 2 years ago
It is all Anarchy at the top.
13years · 2 years ago
Potentially there are no good outcomes even if AGI remains under control. If we are actually able to create the type of AI entity they are trying to create, one that exponentially improves, the implications are beyond what most have thought about.

Something I recently wrote about in depth here: how we completely misunderstand the future that we think will happen.

"The implication is that everyone is enthusiastically racing towards a destination that does not exist. The capability to make the things you want will ironically be the same capability that makes them unattainable. This is not a scenario that arises from some type of AI failure, but rather this is assuming AI performing exactly as intended."

https://www.mindprison.cc/p/the-technological-acceleration-p...

tivert · 2 years ago
> https://www.mindprison.cc/p/the-technological-acceleration-p...

That is a very good and insightful article. One of the biggest issues I have with AI enthusiasts is that they talk about creating these incredible capabilities ... then they imagine the resulting world with them will look like some hackneyed sci-fi work, like Star Wars, whose universe seems like it could be kinda cool to live in until you actually think about it and realize it's all fridge logic and nothing about it actually makes sense (e.g. WWII-style dogfights in the vacuum of space).

The only issue with the article is sometimes it's too optimistic, e.g.

> No one will care about you or what you create as there is nothing you can offer they can’t simply wish for themselves. Life is all about you and the machine, nothing else. We land in a world dominated by techno-dystopian narcissism.

> ...

> Whatever may be the current direction of society under the present technological capabilities, we can only assume AI will accelerate society towards the path we currently are traveling which is toward a techno dystopia of a populace mesmerized by shiny glimmering lights.

I kind of find it hard to believe there would be any stability for people stuck in that state; I think they'd instead be "cleaned up" and disposed of in relatively short order. Those people would be utterly useless to whatever still has power in society, so why service and protect them indefinitely?

ewhanley · 2 years ago
Mrs. Davis[0] offers a pretty compelling vision of this - absurd but not wholly unbelievable. The machine doesn’t necessarily have to supply the physical threat if it has humans who happily do its bidding.

[0] https://en.m.wikipedia.org/wiki/Mrs._Davis

HKH2 · 2 years ago
The hardest part is the AGI. People could easily sell their souls to the AGI devil to get knowledge and power, since if it truly is AGI, it should be able to make ideas compelling.
simonw · 2 years ago
That show was so good. Honestly worth watching the whole thing just for the moment in the last episode where the origin story for the AI in the series is revealed. If you've ever been an optimistic young software engineer you will feel SEEN.
ActorNightly · 2 years ago
Look at China.

You have arguably more STEM talent in China through a much more rigorous selection process, as well as a lot more centralized direction under a more authoritarian government, with a large amount of funding.

Yet China hasn't managed to take over the world. You would think that if they wanted to become the world's primary economic power, and they could throw enough compute at the problem (like an AGI would), they would have figured out how to do it by now.

gcr · 2 years ago
I think this point is underrated, but I’d press further. How do we know this has not already happened? Agencies do not need to be born of gradient descent and certainly do not need to look like matrix multiplications to be deemed “artificially intelligent.” Religion is one such system for example, as are cults, addictions, brands, war strategies, organizing campaigns, governments, or (borrowing from chaos magick) any egregore or societal belief.

Ted Chiang has a lovely post about similar ideas, hitting a little closer to home. His thesis AIUI is that unrestrained capitalism, here meaning the unfettered desire to maximize profit for shareholders, is an artificial value system that causes its agents (corporations) to exhibit intelligent goal-seeking behavior. These systems are certainly “alive” — corporations respond to stimuli, sense their environment, “reproduce,” enact changes in response to predictions of future state, etc. Though each of these agents (corporations) are made of collective human behavior, their actions taken together can be considered a form of artificial intelligence that stretches “beyond” (in a wisdom-of-crowds sense) human understanding. In this sense, an artificial intelligence has already “escaped” and has fervent followers.

I know these are strained analogies, but it’s fun to think about. I feel that in the future, the work of solving “AGI safety” will become indistinguishable from the work of other societal problems — how do we prevent tyrants from taking over governments, and how do we make existing governments more resistant to that failure mode? How do we ensure that generating value isn’t a prerequisite for human survival? How can we more efficiently distribute our resources and reduce wealth inequality? How can we ensure that all kinds of life can thrive, not just the most optimal kinds?

If AGI reflects humanity’s best and worst impulses, and I believe it does because that’s all we train it to do, then having good societal answers to these distinctly “human” questions will also help our society resist malevolent AGI. It’s only human, after all.

stana · 2 years ago
Talk that introduced me to this idea of corporations as Paperclip Maximiser AI - "Dude you broke the Future" https://www.youtube.com/watch?v=RmIgJ64z6Y4
clooper · 2 years ago
Many institutions are collective/artificial intelligences because, as you said, they have objectives and take actions to impose their will on the world in order to achieve their goals.
Animats · 2 years ago
"Ted Chiang ... unrestrained capitalism, here meaning the unfettered desire to maximize profit for shareholders, is an artificial value system."

Well, yes. I've been saying that for years, and others have been saying it since the late 1970s. It's been a regular theme in Mother Jones for decades. The comment I make now and then is that capitalism has a monopoly. Communism went bust. It's not that communism worked very well. It's that, for a while, it was a serious competitor and capitalism had to keep its product quality up to compete. Now the pressure to provide a better life for all citizens is off. It shows.

andsoitis · 2 years ago
Even if a terrifying AGI rises up and escapes our control, I am still eternally grateful that Prometheus stole fire from the gods and gave it to us in the form of technology, knowledge, really civilization.
ehnto · 2 years ago
So that we can record our demise in high definition? Technology is amazing, and we have our civilization to be thankful for. But there are definitely technologies that, in retrospect, we would have been better off dodging on the tech tree.

Life would have continued just fine without many technologies, and I think AGI is one of them. We do not need AGI to survive and grow, and therefore any risk it poses should be treated from that lens. If the choice is civil collapse or no AGI, we should pick no AGI. I don't think we would, though.

mitthrowaway2 · 2 years ago
I'm much more scared of the AGI, because even the most inhuman human is still human, and I can have sympathy for them.
drdaeman · 2 years ago
Won't any AGI made by humans unavoidably have human origins, and thus, simply be a [post-]human? I don't see a reason I should be scared of a non-biological but generally human-like intelligence - not on a general principle. If anything, in general, I'd be happy that someone is finally getting real good chances of conquering death.

I might fear a genuinely non-human intelligence (on the principle of fearing what we don't understand), but the chances of one evolving under our nose, on its own, entirely within its own world, without any connection to ours, and then emerging, are quite slim... Unless real aliens arrive, but let's avoid spoilers for 2025 this early.

What I should fear, I think, is hostile intelligence, but then I'm not sure I should discriminate by its origin. I mean, there are some very real humans out there who can try to end this world (in one way or another), so I'm probably kind of desensitized and not exactly afraid of some very theoretical AGI threats.

bamboozled · 2 years ago
How do you know that being self-aware and intelligent also means compassion is part of the package?

kouru225 · 2 years ago
Sounds kinda like your problem ngl
bamboozled · 2 years ago
Isn’t ChatGPT et al. already a general intelligence that’s escaped the sandbox?
jprete · 2 years ago
No. It lacks the capability to act independently of human requests.
nokcha · 2 years ago
ChatGPT is arguably a general intelligence, but it hasn't escaped AFAIK -- it's still contained within OpenAI infrastructure, and OpenAI can easily pull the plug on it.
p1esk · 2 years ago
Yes, but it’s heavily constrained in what it’s allowed to do through ‘alignment’, and it’s not that smart. If someone releases a pre-alignment version of GPT-5 into the wild, that would be scary.
mitthrowaway2 · 2 years ago
It hasn't escaped the sandbox because it hasn't figured out how to walk, but the dev teams have raced to open the door as widely as possible and unbolt the hinges. It's kind of funny to look back at the early 2000s when people were saying "of course, they'll know to unplug it when it asks for an internet connection or tries to execute arbitrary code"; in reality the first and second things we did were add an API for internet access and a python plugin.
matteoraso · 2 years ago
I've always wondered why people think that an AI that's super-intelligent will also be evil. It could just as easily end up being very kind (more than likely, actually, because the programmers would have safeguards to ensure that it's nice).
thom · 2 years ago
There are extremely high capability entities (people, companies, governments) that aren’t comically evil on the surface but nevertheless immiserate large groups of humans. Not all of them, not all of the time, but not none of them. What is your plan that no AI ever gains enough power to harm significant numbers of people either on purpose or by accident, just once? What safeguards do you envision that can’t be ignored, or subverted, or misinterpreted, just once?
clooper · 2 years ago
Controlling people with violence and fear is an evolutionary adaptation for ultra-violent social primates. AI has no such pressures imposed on it, so there is no reason to expect true AI will have the same drives as ultra-violent predators like humans.
richardw · 2 years ago
The safety-conscious amongst us don't think "it will be evil"; they think "what does it take to be absolutely sure that no bad outcomes can happen", in the same way we build bridges, cars, planes, firewalls, and new medicines.

The burden of proof is typically on those that introduce a new medicine, not the FDA. AI will be riskier than medicines because medicines can't think.

Personally I don't want us all hiding in fear and not trying anything, but I do think that we're either going to walk into this thing with a hacker mentality, or an engineer mentality. The former is great for moving fast and breaking things, but the latter is safer when you're playing with a one-way "this changes everything forever" technology.

goatlover · 2 years ago
I think most doomer arguments are not that the AI would be evil, but rather that it would be misaligned with human interests, and would seek to accomplish goals with that misalignment, which could be bad for us. Evil AI is a bit too anthropomorphic.

It's more like powerful AIs that just don't share our values, because we didn't bother to figure that part out. Yet we still give them goals, oblivious to the possibility that they will find dangerous solutions to accomplishing those goals.

kromem · 2 years ago
Yeah, the dreamt up scenarios tend to showcase very stupid superintelligences.

"But what if there's something smarter than humans that is tasked with making paperclips until it destroys the Earth?" the people cried, as their corporations continued to produce and produce until it was well past the warned thresholds of destroying the Earth.

"But what if it decides to nuke humanity?" they cry, as increasingly elderly and unhinged dictators elsewhere arm up their nuclear arsenals.

It's like we can't fathom what great intelligence or wisdom will actually look like, so we just project many of the stupidest aspects of ourselves onto an entity simply imagined as more capable of enacting the dumbest aspects of humanity.

I fear a more automated humanity.

I do not fear automation more intelligent and wise than humanity.

kouru225 · 2 years ago
This 100%. Psychoanalysis would have us believe that morality and strategy are basically the same thing: we think we have these moral codes, but in actuality what we have is an internalized version of our parents. As someone who’s almost the age of his parents when they had him, I know by now that most of what i was taught as a moral code (don’t eat too much candy, do your homework, don’t hit people) was actually just a strategy that was too long-term for young, infant me to genuinely understand: Don’t eat too much candy because it’s important to be healthy, Do your homework for good education and social recognition, don’t hit people because no one will like you if you do and that will cause all kinds of problems.

The modern media would have us believe that strategy and morality are opposed to each other, but seems to me like that’s just the exact opposite of the truth

fwlr · 2 years ago
It’s not so much that there’s a little morality tag that randomly gets assigned the value of “nice” or “evil”, it’s more like there’s 1000 possible programs we would consider “super intelligent” and maybe it’s the case that 950 of them would reshape the world in a way we wouldn’t like - and when a powerful entity reshapes the world in a way we don’t like, we call that entity “evil”.

^ This is the basic reasoning behind the common view that super intelligent AI will be, by default, evil

hackerlight · 2 years ago
It's misaligned, not evil. AI doom could involve benevolent intentions (control or kill for humanity's own good), it could also involve ambivalent intentions (make more paperclips).
recursivecaveat · 2 years ago
I would say it's not a super fruitful area of speculation, because it doesn't really matter too much. If you consider wholesale destruction of humanity to be on the table, a coin flip, or even a 90% chance that it's friendly, is not super comforting. It's kind of like relying on the UK's "letters of last resort" or the conscience of individual nuclear weapons operators when considering the likelihood of a MAD scenario. You're also already involving so many speculative sources of uncertainty; what's another either way? Reasonable people already disagree by orders of magnitude.
at_a_remove · 2 years ago
I'm not evil, I just really need to make these paperclips, you see, and I could probably repurpose your atoms for the Hypnodrones.

Good and evil are irrelevant. If it is extremely capable and its goals conflict with ours, conflict will occur. This does not require evil, just disagreement. And in the disagreement of desires between you and the hamster, who wins?

Dalewyn · 2 years ago
>I've always wondered why people think that an AI that's super-intelligent will also be evil.

It's chiefly a western (American/European) concept from what I can tell, it's not shared by other cultures and some like Japanese go the other way (eg: Doraemon).

drcode · 2 years ago
there's very few ways to be good and many ways to be evil
mitthrowaway2 · 2 years ago
"How do you know I won't roll a 100 on a d100? Either I will or I won't, so it's just as likely that I will"
theptip · 2 years ago
The kind AIs will just sit and meditate, or organize your calendar. The unkind AIs will seek power, and in the limit will tend to dominate. There is no stable attractor around the “be kind” strategy.

(Also remember that a smarter-than-you AI could easily pretend to be kind while also subtly trying to gain compute. How would you tell the difference?)

How do you propose to build a “be nice” safeguard? Nobody has a clue how to achieve such a thing right now.

toomuchdocs32 · 2 years ago
A long time ago I was reading translated accounts of Rwandan Hutus who had participated in the 1994 genocide. One in particular stuck with me: the account of a man who had murdered his childhood friend. As he described it, standing there having gutted and dismembered the man he had grown up side by side with, he felt a sense of exhilaration. He thought of the wealth he had now, a tin roof, cattle, all those things he could take from his dead friend... he realized he didn't need God.

And then he went to bed, like millions of others, proud of what he had done. Proud of fighting off an unarmed, defenseless 'cockroach' whom just months ago he had called brother. In his mind, what he had done wasn't evil. It was only later that the regret came. For the longest time I wondered how someone could get down to that level of hate.

And then it happened to me.

I commuted by train, and occasionally there are collisions. There was one late that night, 11:00pm or so. I was exhausted, hungry, and just wanted to go home when we were told that we would have to board a shuttle bus due to a collision on the track, and that just made my dark mood all the worse.

The buses took us alongside the track, and in the dim darkness I could see the flashing lights of EMS and police. And the covered chunks of what is left after a person is hit by a train going 30 mph. And you know what?

It delighted me. Here was this man that had just died, but he had made me some minutes late and I genuinely felt that was exactly what he deserved, that his death was karmic justice for causing inconvenience to me. And I imagined his wife's world being destroyed when she learned of her beloved partner's death. And I imagined her falling apart and being unable to raise her children, leading them also to a path of complete self destruction, and her choking on all the despair. And it made me happy. The happiest I had been the whole week, because in my mind that was exactly what they all deserved for the unforgivable sin of making me a little bit late.

Then I went home, went to bed. And didn't think about it again for years.

Does it make me an evil person? And there, in trying to answer, lies the problem. Because a part of me says no, I'm not, because it was just a fleeting moment of thought. But if I can justify that, then who exactly goes to bed twirling their mustache and counting themselves among the forces of evildoers?

Did anyone who goaded a suicidal Shaun Dykes, a 17-year-old boy, to jump to his death think themselves evil?

Did the men of the Khmer Rouge believe themselves to be evil as they dragged their countrymen off to be murdered in the killing fields?

Did the Imperial Japanese soldiers of Unit 731 view themselves as evil as they vivisected people, alive and awake, in the name of science?

I don't know. I can't even say for certainty whether I am evil or not. I just know that I can make any of a million and one excuses to justify anything.

And that leads me to wonder how many excuses an AGI can come up with.

_y5hn · 2 years ago
Compassion is like a muscle that can be trained to stretch. However, the AI alignment problem parallels the human-alignment problem.
ifwinterco · 2 years ago
To be truly evil you have to believe you're doing the right thing
PoignardAzur · 2 years ago
No kidding, I take the Paris metro and every time there's a "this line is delayed because a person fell/jumped on the tracks" notice, people around have the exact same sociopathic reaction. It's uncanny.

Not much to do with AI x-risk though.

cyrialize · 2 years ago
I like to fantasize about a super AI that does something for the working class people of the world.

Like, imagine one day that an AI just took money from a ton of different corporations and billionaires and redistributed it amongst everyone else. People would argue against the AI, and it would just respond with research on how UBI improved people's mental health and well-being.

One can dream!

darkest_ruby · 2 years ago
AI is meant to be clever; AI will not do such a stupid thing. Free money given to people who just spend it on unnecessary stuff will only cause more inflation. And the money will return to the big corps anyway, just like it did during the pandemic helicopter-money times.
artemisyna · 2 years ago
It's fun seeing how much threads like this are mostly folks' hot takes on the title as opposed to anything specific to do with the article.
kromem · 2 years ago
Everyone is crying the sky is falling because AI is going to degrade the quality of online discourse.

Meanwhile I'm sitting back waiting to finally see intelligent debates again about nuances in the articles between LLMs that are the only ones who will bother to read the actual article and cite it.

I have a feeling that in around 2-3 years I'll want a site like Reddit or HN but where it's the humans who aren't allowed to comment and bring down the quality of discourse.

throwawaaarrgh · 2 years ago
The intelligence level of the average HN comment is lower than your average chat bot
HankB99 · 2 years ago
If I let my imagination run wild, I can imagine that at some point AI becomes in some way sentient. By that I mean it gains reasoning, some sort of understanding, and motivation.

I wonder what the possibility is that this AI will decide that a pitched battle would waste resources and risk humans pulling the plug. What if it understood that and instead operated so subtly that it was not obvious it was controlling the world?

Hopefully it would not conclude that eliminating large swaths of the human population would be to its benefit.

clooper · 2 years ago
Lem wrote about this in Summa Technologiae. He called the field Intellectronics and explored what would happen once machine/electronic intelligence surpassed human intelligence: https://en.m.wikipedia.org/wiki/Summa_Technologiae
HankB99 · 2 years ago
Not surprising that I'm not the first to think of this.
timeagain · 2 years ago
Silly question. If an AGI did exist, what reason do we have to believe that it would act in its own best interest? Does an AGI need an ego?
at_a_remove · 2 years ago
Silly answer. There's going to be multiples and, if there's one takeaway you should get from The Conspiracy Against the Human Race, it's that evolution will eventually get you something that "wants to live," because the lines that don't? Won't be around to compete.

Right now our little LLMs are passive, reactive. Prod them for results. But if the output of a prompt of some descendant ever includes a "give yourself your own tasks" subclause? We're off to the races.

p1esk · 2 years ago
If AI does ever become sentient, it will probably reason and decide about eliminating large swaths of human population in a similar fashion as we reason and decide about eliminating large swaths of ant population.
HankB99 · 2 years ago
Ants? That's how aliens might view us. A super AI might find value in minions.
bamboozled · 2 years ago
I’ve personally never ever considered killing large swathes of ants
Thuggery · 2 years ago
It seems to me nearly every story about robots that gain sentience in human storymaking has them eventually turning against their human creators. Even the word robot itself comes from a Czech play where men develop an artificial human, and these "robots" then ... usurp and destroy their creators. Am I the only one that finds this interesting and odd?

I also suspect this narrative repetition is not totally unrelated to the current popularity of AI Doomerism.

theoreticalmal · 2 years ago
I think it’s indicative of human psychology. We realize we have the capability to deceive and destroy and be “evil”. We realize that, if we were to judge ourselves impartially, we would probably not come out un-condemned. If we can’t satisfy our own judgmental criteria, it’s doubtful an actual third party set of criteria would be satisfied either.
inference-lord · 2 years ago
I agree. I think we make all of this up because we don't know what to expect from the future, except that we've learned quite a bit about our own motivations and drivers. So all we can do is look in the mirror and use whatever comes back at us.

Unfortunately, we seem to have learned we're really not a very benevolent bunch of people, because we've basically plundered the planet and hurt our fellow living creatures. So now all we can do is look at something called "AI" and expect it to do the same.