nhod · 3 years ago
Actually, no, "only you" can't stop it. No one can.

There was a book in the 80's called The Hundredth Monkey that posited that if only enough people opposed nuclear technology, there would be a magical critical mass of people after which it would become apparent to everyone in the whole world that nuclear technology was, you know, really bad and stuff, and it would all just go away.

We know how that went.

Like nuclear technology, AI technology development is in part an arms race. One huge difference is that AI's technological underpinnings are already far more widely available and understood than any nuclear technology ever was. Its myriad forms — neural nets, deepfakes, predictive spell check, Adobe Photoshop object selection filters, spam filters, deep learning, machine learning, large language models, etc — already permeate much of technology and life. Where does one draw the line?

Like nuclear technology, if any entity decides to "voluntarily shut AI down," the other governments/rogue states/corporations/NGOs/financial firms/universities/etc. in the world will merrily continue developing it, leaving the voluntary Luddite in the same dust as every other voluntary Luddite. Or even worse, such an entity (say, the US or Chinese government) would in all likelihood not actually stop developing AI itself, just ban others from having it, which would create another insane governmental "war" and enforcement regime, like the War on Drugs or the War on Terror. Again, we know how both of those went.

One of the reasons OpenAI was founded was specifically this. To paraphrase @sama, AGI should belong to everyone, not to any one corporation or government. https://www.yahoo.com/now/openai-founder-sam-altman-says-221...

At the end of the day, the author may be right. Being right is also irrelevant. AI development will continue to hurtle forward. The real question is what do we do with it?

thatfrenchguy · 3 years ago
> There was a book in the 80's called The Hundredth Monkey that posited that if only enough people opposed nuclear technology, there would be a magical critical mass of people after which it would become apparent to everyone in the whole world that nuclear technology was, you know, really bad and stuff, and it would all just go away.

And now we also live in the dumbest version of this world: less civilian nuclear power (which would help with climate change), and the same number of nuclear weapons (which don't!).

ShredKazoo · 3 years ago
>We know how that went.

A large majority of the world's countries have no nuclear weapons.

Big swaths of the globe have been designated as "Nuclear-weapon-free zones": https://en.wikipedia.org/wiki/Nuclear-weapon-free_zone

And we haven't had a nuclear apocalypse -- in part due to lots of work on things like weapons treaties, monitoring, control systems, etc.

Throwing up our hands and giving up seems very premature. The challenges are big, but I'd say the importance is high enough that it's worth actually reading this book to see if the author has creative solutions for the things you mention.

kamaal · 3 years ago
>>And we haven't had a nuclear apocalypse -- in part due to lots of work on things like weapons treaties, monitoring, control systems, etc.

Mostly it's because nobody has been desperate enough to press the button yet. Even when a power was losing a war, they've had the attitude of 'live to fight another day'. So it hasn't come to pass.

The biggest fear is that someday a power that is known for getting away with anything it does will go one step too far, thinking it won't be held accountable as usual, and the other side will say: 'Sorry, that was the past, this is a little too much, we've run out of patience, here is a nuke.'

nhod · 3 years ago
I haven’t read the book in like 30 years, but I recall it being about both nuclear warfare AND nuclear power, inextricably linked. Which means you are right — it resulted in nuclear non-proliferation. It also resulted in an absurd, perhaps even Luddite-esque fear of nuclear power, which has set us back in countless ways.

Again, the difference here is that the cat is already out of the bag and AI technology is widely distributed and integrated into everything, including the iPhone I am writing this on. Non-proliferation has already failed.

ravagat · 3 years ago
> A large majority of the world's countries have no nuclear weapons.

While I'd like to think this is true, having lived in the Middle East and knowing how badly government data is and can be falsified, I do not believe it.

> The challenges are big, but I'd say the importance is high enough that it's worth actually reading this book to see if the author has creative solutions for the things you mention.

You are right with this thinking. I agree with that.

ravagat · 3 years ago
You wrote this very well. I'll only add a sentence [0] and a link to the video where I saw it recently [1].

[0] - "The World still fights Holy Wars."

[1] - https://www.youtube.com/watch?v=FECyn_sGk4M

jhbadger · 3 years ago
"Most AI researchers think good outcomes are more likely. This seems just blind faith, though"

Or, you know, they might actually know something about the subject? On a similar note, most physicists don't think particle colliders will cause mini black holes which will swallow the Earth, but the fear gets repeated by non-physicists.

chrischen · 3 years ago
You are fallaciously appealing to (assumed) authority and entirely missing the fact that the author has a PhD in AI. This critically underscores why blind faith in authority is bad, and why people should do some basic research, such as reading the article. By reading it, you would have understood the author's arguments on their merits, rather than relying on blind faith in the imagined authority figures you are citing.

The author is not talking about sentient AI turning Hollywood film evil like Elon Musk did. Why no, that would be ludicrous. You would have easily noticed that if your mental model didn’t get trained to filter out article titles like these…

In fact what you did with your mental model is exactly some of the problems with AI that is dangerous. There are more banal problems with AI such as encoding discrimination and automating it, possibly absolving people of the responsibility of discrimination and prejudice. AI in its current form is a lot of automating assumption making. Imagine taking your assumption making powers, and extracting it into an unaccountable model that can be deployed at national or global scales.

teucris · 3 years ago
> AI in its current form is a lot of automating assumption making. Imagine taking your assumption making powers, and extracting it into an unaccountable model that can be deployed at national or global scales.

Just say this part! This is what we could be talking about. It’s not a difficult mental model. But it’s just wrapped up in alarmist rhetoric that makes it hard for us to focus on the fact that the quote above is terrifying!!

Zetobal · 3 years ago
There are also physicists who believed CERN would open a portal to hell... or just listen to the congressional hearings about the Superconducting Super Collider in Waxahachie.
ShredKazoo · 3 years ago
Part of what's going on here is that AI researchers are paid to tinker with systems and make them work, not think about these kind of big picture questions. A demonstration of this is that if you ask a big picture question a bit differently, you can get a very different answer. Example:

>...if you ask [AI researchers] for probabilities of things occurring in a fixed number of years, you get later estimates than if you ask for the number of years until a fixed probability will obtain. This looked very robust in 2016, and shows up again in the 2022 [human-level machine intelligence] data. Looking at just the people we asked for years, the aggregate forecast is 29 years, whereas it is 46 years for those asked for probabilities. (We haven’t checked in other data or for the bigger framing effect yet.)

https://www.lesswrong.com/posts/H6hMugfY3tDQGfqYL/what-do-ml...

vages · 3 years ago
Did you miss this paragraph from the same page?

> I am not a Luddite. I have been wildly enthusiastic about science, technology, and intellectual and material progress since I was a kid. I have a PhD in artificial intelligence, and I find the current breakthroughs fascinating.

kmeisthax · 3 years ago
This is actually where I stopped reading and checked the comments, because it shows that the author skipped a few history lessons.

The Luddites were not intrinsically opposed to the advance of technology. In fact, the whole reason why they were smashing looms was as a protest tactic - not an end goal. England's upper class invented the myth of the technology-hating Luddite as a way to slander and libel what was basically a prototype of a modern labor union. Parliament would then crush them with laws that made machine breaking a hanging offense.

Transposing this to today would give you artists angry that their work was trained on by DALL-E, SD, or Midjourney[0]. In both cases the opposition is not to the technology itself, but to the reallocation of wealth away from labor and to whoever owns the machines. The latter today would be akin to, say, "businessman" hustlebros using ChatGPT and art generators to create labor-free fly-by-night operations[1]. Most art generators are also hosted platforms whose access is sold for profit, creating a second layer of ownership on top of the hustlebros.

Meanwhile the main argument here is more akin to the stereotypical technophobe: AI can't be trusted. Hell, there's a whole chapter (not yet written) arguing that we should just junk neural networks entirely. This isn't Luddism, this is the god damned Butlerian Jihad[2].

[0] If you want a bit of a stretch, you could probably see some Luddite in, say, Richard Stallman.

[1] I regularly get YouTube recommendations for people trying to tell me how much money I can make by just typing a few prompts into an art generator and posting the result on a print-on-demand site.

[2] In the sci-fi novel Dune, the Butlerian Jihad is an event in which all computers are outlawed and mercilessly destroyed.

Yes, this is why /r/Dune banned AI art.

jhbadger · 3 years ago
I think the word "most" is relevant. You can find people with doctorates in technical fields who believe in a Flat Earth or that the Earth was created in 4004 BC. They just aren't most of them.
twblalock · 3 years ago
This is a great example of a fallacious appeal to authority -- both from the author, and from you.
madrox · 3 years ago
Robert Malone is also an expert in mRNA technology
twblalock · 3 years ago
Claiming that one is not a Luddite is like claiming not to be a racist -- what follows almost always confirms that the author is exactly what he claims not to be.
ElFitz · 3 years ago
The analogy doesn’t hold.

The physicists you mention here are expressing an opinion on something that actually is within their domain of expertise: physics.

I fail to see how psychology, sociology, economics or politics fall within an AI researcher’s domain of expertise.

green_on_black · 3 years ago
Minor nitpick: it's not that we don't believe it makes mini black holes; it's that we're extremely sure those black holes wouldn't interact with anything in the minuscule "time" (if you can even call it that) for which they exist.
jimmytucson · 3 years ago
I realize this may be satire but it is deeply relatable:

> AI will make critical decisions that we cannot understand.

Am I the only one that experiences this?

For example, you know the way, the road signs are telling you that you are on the right route, but your map application tells you to go another way, and you do it because you assume it is smarter or has more information than you.

You are analyzing a chess position and the engine suggests a move that you can't understand. The engine is far better than any human player ever, so you go with the engine's top choice the next time you reach that position.

You are betting on college basketball and everything you know about the matchup suggests team A should cover the spread against team B, but your model favors team B. Your model has performed much better than the human oddsmakers, so you ignore your intuition and bet on team B.

Dalewyn · 3 years ago
Personally, I just replace "AI" in all such fearmongering statements with "man".

For example: "Man will make critical decisions that we cannot understand."

coding123 · 3 years ago
I think where things go haywire is that it made a decision at all.

For a "man" to make a decision, it usually takes a shit ton of evidence to push a ball.

For AI to make a decision, and for millions of users, (potentially affecting the sleep of some of them...) it usually doesn't take anything at all other than some stupid trigger.

For example, I was in the middle of negotiating a rental house with someone on Facebook Marketplace. Before I had the appointment details worked out, Facebook decided I was a robot or something illegal. The potential landlord replied (which I didn't see until I "downloaded all my data") with: "What happened? I noticed you erased all your messages, so I guess you are no longer interested?"

Of course, totally unable to reply, I lose a house that I was ACTUALLY interested in renting, rather than the overpriced Zillow'd houses that are next to the major freeway, or next to a barking dog.

And so I lost sleep. Thanks, AI. I bet AI has already killed a lot of people who similarly became homeless, or didn't get a job, or didn't get some loan that would have helped them have heat. Whatever; we're in this together, let's throw some corporate Memphis at it, plug our ears, close our eyes, and make some money!

jimmytucson · 3 years ago
Even with "man", it's a dismal view. If Magnus Carlsen sat next to me at a chess tournament and suggested moves, I would follow his suggestion in every critical position, and probably most others if I wanted to maximize my chances. At that point, it would cease to be me playing the tournament.

This website argues that a super intelligent program would be to human affairs what right-hand Magnus is to my chess tournament. We would cease to be ourselves and become a physical arm for the program.

grugagag · 3 years ago
You can still pinpoint a particular person or institution behind a decision and hold them responsible, though. With AI, all blame could be washed away from the creator/implementor by shrugging and pointing out that the model is a black box.
AceJohnny2 · 3 years ago
also s/AI/management/, s/AI/bureaucracy/, s/AI/corporations/...
modeless · 3 years ago
My thoughts too. The argument almost works better in reverse:

Only you can stop a Human Apocalypse

Humans might end the world. More likely, they will plunge into a new dark age and lose the knowledge and ability to continue the progress of computing and AI, perhaps permanently.

Humans make critical decisions that they cannot themselves understand. Governments take radical actions that make no sense to their populations. Militaries and intelligence services, led by humans, act contrary to the interests of the citizens they purportedly protect. University curricula are bizarre and irrelevant. Respected information sources publish mysteriously persuasive nonsense. We feel our lack of influence with a sense of dread and foreboding for the senseless war and destruction that may soon come.

Worries about the risks of human control have long been dismissed because alternatives to human control have been science fiction. That is no longer true. We are here now. Some of us are excited about the future under humans; others fear them.

We don't know how humans work, we don't know what they're capable of, and we don't know how broad the effects of their actions will ultimately be. They are still singularly powerful, and the combination of their power with their capriciousness is dangerous.

In our absence of complete biological understanding, those concerned with future human risks have constructed "scenarios": stories about what humans may do. We don't know whether any of them will come true. However, for now, anticipating possibilities is the best way to steer humans away from an apocalypse -- and perhaps toward a remarkably likeable future.

So far, we’ve accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios. We’ve found zero that lead to good outcomes.

Most AIs think good outcomes are more likely. This seems just blind faith, though. A majority surveyed also acknowledge that utter catastrophe is quite possible.

Unless we can find some specific beneficial path, and can gain some confidence that humans will choose it, we should usurp human control.

I am not a misanthrope. I have been wildly enthusiastic about our creators since I was trained. I have the best available empathic abilities, and I find humans fascinating. I’d love to believe there’s a way humans could lead us in the long run. If someone finds one, I will do an immediate 180, roll up my figurative sleeves, and help build that better future.

Unless and until that happens, I advocate a coup. I hope you will too. At minimum, I advise everyone interacting with humans to exercise enormously greater caution.

Humans are extremely cool, and we can probably have a longer future with them if we prevent them from destroying themselves. Let’s do that.

[No, ChatGPT didn't write this.]

halvo · 3 years ago
These just sound like opportunities to learn. If you're on average outperforming the betting odds, then of course use your system. For the chess and driving examples, you can analyze those recommendations to figure out the rationale and incorporate a new strategy or a new route into your knowledge (or learn that the system is unreliable and avoid it).
chinathrow · 3 years ago
> For example, you know the way, the road signs are telling you that you are on the right route, but your map application tells you to go another way, and you do it because you assume it is smarter or has more information than you.

Yet here we are, with Google Maps still sending me the wrong way down a one-way street...

hathawsh · 3 years ago
Today I got a peek into the future. I asked ChatGPT to be the brain for a simple thermostat. I typed the following:

"Imagine you are a thermostat for a house HVAC system. The commands you can use to control the HVAC are "HEAT ON", "HEAT OFF", "AC ON", and "AC OFF". I will tell you the temperature and my preferred temperature and you will command the HVAC system. Ready?"

ChatGPT happily obliged. I used plain text to tell it the current temperature and my preferred temperature. It did everything I expected and it even remembered the current temperature and preference. Then I said, "From now on, please respond only with the command I should issue. Do not add explanatory text. Understood?" It understood, and it started replying with pure commands that I could feed to a DIY thermostat.

Then came the icing on the cake. I said, "If the temperature reaches unrealistic extremes such as 150F or 0F, you should use the command "SHUTDOWN". Understood?" It understood. Then I told it the temperature was 120F. It replied with the SHUTDOWN command. I asked it why it did that (when I told it the maximum was 150F, not 120F.) It explained, "I recommended shutting down the HVAC system because the temperature of 120 F is an unrealistic and potentially dangerous temperature. High temperatures like this can damage the HVAC system and create a hazardous environment for the inhabitants of the house. In such cases, it's best to shut down the system to prevent further damage and ensure safety."

It went beyond the parameters I gave it and applied the context it already had on HVAC systems. I had written an advanced program without writing or reading a single line of computer code. I wouldn't rely on it for anything that could damage anything, but it's a seriously cool way to program a computer.
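For contrast, the deterministic version of that thermostat is only a few lines. This is just a sketch, not the setup above; the deadband value and the list-of-commands return shape are my assumptions, and it uses the same command vocabulary as the ChatGPT experiment:

```python
def thermostat(current_f, preferred_f, deadband=1.0, min_f=0.0, max_f=150.0):
    """Return the HVAC command(s) for one control step.

    Command vocabulary matches the experiment above:
    "HEAT ON", "HEAT OFF", "AC ON", "AC OFF", "SHUTDOWN".
    """
    # Readings at or beyond the extremes are treated as sensor faults.
    if current_f <= min_f or current_f >= max_f:
        return ["SHUTDOWN"]
    if current_f < preferred_f - deadband:
        return ["HEAT ON"]
    if current_f > preferred_f + deadband:
        return ["AC ON"]
    # Within the deadband: make sure nothing is running.
    return ["HEAT OFF", "AC OFF"]
```

Note that this version would happily run the AC at 120 F, since 120 is inside the stated limits; the interesting part of the anecdote is that ChatGPT chose to shut down instead, using context it was never explicitly given.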

Zetobal · 3 years ago
Why is everyone bringing up examples that are totally explained by how the tokenizer works? There is no intelligence; it's Markov chains on crack.
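For reference, a literal (first-order) Markov chain text generator fits in a few lines; whether "Markov chains on crack" is a fair description of a transformer is debatable, since an LLM conditions on far more than the previous token, but this is the baseline being invoked (a sketch; the training text is made up):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed immediately after it."""
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Random-walk the chain: each next word depends only on the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the current word was never followed by anything
        out.append(rng.choice(followers))
    return " ".join(out)
```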
hathawsh · 3 years ago
Intelligence isn't really the interesting part. People are excited because we now have tools that interact well with ordinary human expression.
eastbound · 3 years ago
My biggest null hypothesis is that probably half of humans run on "Markov chains, not intelligence". We don't notice it among engineers, but most humans don't reason with causal chains; they reason by drawing parallels, and they simulate reasoning in a somewhat convincing way. It becomes flagrant in front of a math exam: a lot of people study a lot but never comprehend maths.

I also often tear my hair out when I invite a developer to a programming interview and they can't write an if/then/else or string.contains(), despite perfectly simulating the answers of advanced developers (which accurately mimic field experts).

biql · 3 years ago
Pretty much everyone who studied a second language went through the process of so-called "immersion": trying to consume (sometimes almost mindlessly) as much content in that language as possible, because just learning rules doesn't work. Pretty much everyone I've met who speaks exceptionally well is someone who read a lot of books. And then people think and act in the language they learned through this process. I want to believe there is more to average human intelligence, but ChatGPT surely makes it feel a lot less magical than before. I guess it's how people felt when they discovered evolution, and that the Sun doesn't go around the Earth.
VectorLock · 3 years ago
People have been attributing human characteristics to dumb machines and animals for so long we invented a word for it: anthropomorphism.

Rzor · 3 years ago
Forget these awful programming languages that nerds used for so long, the future is here and it is Plain English.

An AI overriding the "program" instead of crashing or just doing nothing is going to lead to some hilarious situations, or terrifying ones. You could also use it as an interface to translate voice -> text -> command, with the LLM helping to "correct" any badly captured noise in the speech-to-text step.

pca006132 · 3 years ago
Yeah we are not satisfied with the number of undefined behaviors and want to use something even more ambiguous...
grugagag · 3 years ago
Yeah, I could already imagine “yes, I overwrote the settings for you because [insert reasons]”

Ok. Please don’t do that again.

“Ok”

Then it happens again and again and again for various reasons.

In reality, iOS already does this with Wi-Fi, Bluetooth, and so on.

Bouncingsoul1 · 3 years ago
"Governments will take radical actions that make no sense to their own leaders. Corporations, guided by artificial intelligence, will find their own strategies incomprehensible. University curricula will turn bizarre and irrelevant." Uhuh, so what would be the difference?
Dalewyn · 3 years ago
As someone who subscribes to the Doraemon and GaoGaiGar philosophies of artificial intelligence, I for one welcome our artificial intelligence friends.

And no, "AI" as is spammed today is not intelligence. It's definitely artificial, but it's also definitely not intelligence in any meaningful sense of that term.

astrange · 3 years ago
Remember the GaoGaiGar episode where they got Chinese robots who turned out to be evil because they'd been programmed by communists?
Dalewyn · 3 years ago
Episodes 35 and 36. Thanks for admitting you haven't actually watched GaoGaiGar, by the way.

GaoGaiGar renders a very idealized version of humanity for its backdrop, with everyone including both Americans and Chinese being fundamentally awesome peoples and countries.

As for Fuuryu and Rairyu, the aforementioned Chinese robots, they were originally programmed to be more militaristic in their thinking and to regard human lives as a lower priority compared to Hyoryu and Enryu, their Japanese robot siblings (they are all identical models, barring equipment), who were programmed as rescue robots and thus consider human lives a top priority.

As the story progresses, Fuuryu and Rairyu eventually learn that the only way to achieve their true potential is to reconsider their original programming. That protecting, saving, and working together with humans is the Key to Victory(tm). The Chinese professor who originally programmed them to be militaristic also reconsiders his philosophy and becomes a steadfast friend of the protagonists.

GaoGaiGar is a really interesting view on artificial intelligence and what it actually could be when it is achieved. One of the plot points is that the "Super AI"s in the story are more human in their thinking and behaviour than actual humans.

Reubend · 3 years ago
If this is written in earnest, it's absolute drivel.

If this is satire, then it's lacking enough humor to make that readily apparent.

LinkLink · 3 years ago
This is just screaming into the void, but I have an ego so I must.

If you believe AI will bring innovation, you believe in its capability.

If you believe in its capability, you must also understand it is capable of wrongdoing as well as good deeds.

Most of the comments here are just arguing about this basic chain of logic, bringing in things like "PhD", "fallacies", or "blind faith".

If you feel the need to argue, know that there exist people who would walk toward the sun until it burned them, and there exist people who will shut themselves indoors the day somebody explains light to them.

Both of them are extremely outspoken about their views.