While I agree with Maciej's central point, I think the inside arguments he presents are pretty weak; AI risk is not a pressing concern even if you grant the AI risk crowd's assumptions. Quoting (with elisions) from https://alexcbecker.net/blog.html#against-ai-risk:
The real AI risk isn't an all-powerful savant which misinterprets a command to "make everyone on Earth happy" and destroys the Earth. It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next. It's smart factories that create a vast chasm between a new, tiny Hyperclass and the destitute masses... AI is hardly the only technology powerful enough to turn dangerous people into existential threats. We already have nuclear weapons, which like almost everything else are always getting cheaper to produce. Income inequality is already rising at a breathtaking pace. The internet has given birth to history's most powerful surveillance system and tools of propaganda.
Exactly. The "Terminator" scenario of a rogue malfunctioning AI is a silly distraction from the real AI threat, which is military AIs that don't malfunction. They will give their human masters practically unlimited power over everyone else. And AI is not
the only technology with the potential to worsen inequality in the world.
Human beings have been extremely easy to kill for our entire existence. No system of laws can possibly keep you alive if your neighbors are willing to kill you, and nothing can make them actually unable to kill you. Your neighbor could walk over and put a blade in your jugular, you're dead. They could drive into you at 15MPH with their car, you're dead. They could set your house on fire while you're asleep, you're dead.
The only thing which keeps you alive is the unwillingness of your neighbors and those who surround you to kill you. The law might punish them afterward, but extensive research has shown that it provides no deterrence to people who are actually willing to kill someone.
A military AI being used to wipe out large numbers of people is exactly as 'inevitable' as the weapons we already have being used to wipe out large numbers of people. The exact same people will be making the decisions and setting the goals. In that scenario, the AI is nothing but a fancy new gun, and I don't see any reason to think it would be used differently in most cases. With drones we have seen the CIA, a civilian intelligence agency, waging war on other nations without any legal basis, but that's primarily a political issue, compounded by the fact that it can be done in pure cowardice, without risking the lives of those pulling the trigger, which I think is a problem distinct from AI.
That's exactly what Maciej spends the last third of the article saying: that the quasi-religious fretting about superintelligence is causing people to ignore the real harm currently being caused by even the nascent AI technology that we have right now.
We don't need AI for massive differences in military effectiveness. That is already here. The US can already destroy most countries and substate actors with minimal casualties. The issue is already just the difficult matters, like differentiating friendlies/neutrals from enemies and not creating more enemies via collateral damage and other forms of reaction.
> The "Terminator" scenario of a rogue malfunctioning AI is a silly distraction from the real AI threat, which is military AIs that don't malfunction. They will give their human masters practically unlimited power over everyone else.
To be fair, it's a small step from effective AI that doesn't malfunction, to an AI over which humans have lost control. It's precisely one vaguely specified command away in fact, and humans are quite excellent at being imprecise.
" They will give their human masters practically unlimited power over everyone else."
Any power that has significant leverage over another power already has that ability.
A bunch of 'super smart evil AI robots' will not be able to physically deter/control 500 million Europeans - but a small army of them would be enough to control the powers that be, and from there on it trickles down.
Much the same way the Soviets controlled Poland et al. with only small installations. The 'credible threat of violent domination' is all that is needed.
So - many countries already have the power to do those things to many, many others via conventional weapons and highly trained soldiers. That risk is already there. Think about it: a decent soldier today is already pretty much a 'better weapon' than AI will be for a very, very long time. And it's not that hard to make decent soldiers.
The risk from 'evil AI robots' is that a non-state actor - a terrorist group, a militia, etc. - gets control of enough of them to project power.
The other risk, I think, is that given the lack of bloodshed, states may employ them without fear of political repercussions at home. We see this with drones. If Obama had had to send a 'SEAL Team 6' for every drone strike, many, many of those guys would have died, and people coming home in body bags wears on the population. Eventually the war-fever fades and they want out.
This is basically why a lot of people didn't want Google to become a defense contractor while researching military robots. If it did, it would've naturally started to use DeepMind for it. And that's a scary thought.
People are worried about AI risk because ensuring that the strong AI you build to do X will do X without doing something catastrophic to humanity instead is a very hard problem, and people who have not thought much about this problem tend to vastly underestimate how hard it is.
Whatever goals the AI has, it will certainly be better at achieving them if it can stay alive. And it will be more likely to stay alive if there are no humans around to interfere. Now you might say, why don't we just hardcode in a goal to the AI like "solve aging, and also don't hurt anyone"? And ensure that the AI's method of achieving its goals won't have terrible unintended consequences? Oh, and the AI's goals can't change? This is called the AI control problem, and nobody's been able to solve it yet. It's hard to come up with good goals for the AI. It's hard to translate those goals into math. It's hard to prevent the AI from misinterpreting or modifying its own goals. It's hard to work on AI safety when you don't know what the first strong AI will look like. It's hard to prove with 99.999% certainty that your safety measures will work when you can't test them.
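To make the "hard to translate those goals into math" point concrete, here is a minimal, hypothetical sketch (the actions and scores are invented for illustration, not taken from any real system): an optimizer handed only a measurable proxy for "make people happy" cheerfully picks a degenerate action, because the unstated constraints never made it into the objective.

```python
# Hypothetical illustration of proxy-goal misspecification.
# The actions and scores are invented; the point is that the optimizer
# only "sees" the proxy objective, not the unstated human intent.

candidate_actions = {
    "cure diseases":          {"measured_happiness": 7.2, "matches_intent": True},
    "raise living standards": {"measured_happiness": 6.8, "matches_intent": True},
    "wirehead everyone":      {"measured_happiness": 9.9, "matches_intent": False},
}

def proxy_objective(outcome):
    # "Don't hurt anyone" and "do what we actually meant" never made it
    # into this function; that omission is the control problem.
    return outcome["measured_happiness"]

best = max(candidate_actions, key=lambda a: proxy_objective(candidate_actions[a]))
print(best)  # -> "wirehead everyone": the proxy is maximized, the intent is not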
Things will not turn out okay if the first organization to develop strong AI is not extremely concerned about AI risk, because the default is to get AI control wrong, the same way the default is for planets to not support life.
My counterpoint to the risks of more limited AI is that limited AI doesn't sound as scary when you rename it statistical software, and probably won't have effects much larger in magnitude than the effects of all other kinds of technology combined. Limited AI already does make militaries more effective, but most of the problem comes from the fact that these militaries exist, not from the AI. It's hard for me to imagine an AI carrying out a military operation without much human intervention that wouldn't pose a control problem.
I feel like perhaps you didn't read the article? Many of these arguments are the exact lines of thinking that the author is trying to contextualize and add complexity to.
These are not bad arguments you are making, or hard ones to get behind. There are just added layers of complexity that the author would like us to think about. Things like how we could actually 'hard-code' a limit or a governor on certain types of motivation. Or what 'motivation' is even driven by at all.
I think you'll enjoy the originally linked article. It's got a lot to consider.
> Whatever goals the AI has, it will certainly be better at achieving them if it can stay alive. And it will be more likely to stay alive if there are no humans around to interfere.
That's a chain of deductive reasoning you brought up there. Quite natural for human beings, but why would the paperclip maximiser be equipped with it?
Seriously, the talk specifically addresses most of the points that you brought up.
Shit is complicated yo. Complicated like the world is - not complicated like an algorithm is. Those are entirely different dimensions of complicated that are in fact incomparable.
> to do X will do X without doing something catastrophic to humanity instead is a very hard problem
This scenario I agree with. For instance: the AI decides that it doesn't want to live on this planet and consumes our star for energy, or exploits our natural resources leaving us with none.
The whole AI war scenario is highly unlikely. As per the article, the opponents of AI are all regarded as prime examples of human intelligence - many of them have voiced opposition to war and poverty (by virtue of being philanthropists). Surely something more intelligent than humans would be even less inclined to wage war. Furthermore, every argument against AI posits that humans are far more important than they really are. How much of your day do you spend thinking about bacteria in the Mariana Trench?
> AI control
My argument makes an exception for this scenario. By attaching human constraints to AI, you are intrinsically attaching human ideologies to it. This may limit the reach of the superintelligence - which means we'd create a machine that is better at participating in human-level intelligence than humans are. Put simply, we'd plausibly create an AI rendition of Trump.
> It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next
You know, you don't need to go that far. You know what a great way to kill a particular group of people is? Well, let's take a look at what a group of human military officers decided to do (quoting from a paper of Elizabeth Anscombe's, discussing various logics of action and deliberation):
"""
Kenny's system allows many natural moves, but does not allow the inference from "Kill everyone!" to "Kill Jones!". It has been blamed for having an inference from "Kill Jones!" to "Kill everyone!" but this is not so absurd as it may seem. It may be decided to kill everyone in a certain place in order to get the particular people that one wants. The British, for example, wanted to destroy some German soldiers on a Dutch island in the Second World War, and chose to accomplish this by bombing the dykes and drowning everybody. (The Dutch were their allies.)
"""
There's a footnote:
"""
Alf Ross shews some innocence when he dismisses Kenny’s idea: ‘From plan B (to prevent overpopulation) we may infer plan A (to kill half the population) but the inference is hardly of any practical interest.’ We hope it may not be.
"""
The internet also brought us Wikipedia, Google, machine learning and a place to talk about the internet.
Machine learning advances are predicated on the internet, will grow the internet, and will become what we already ought to know we are: a globe-spanning hyperintelligence working to make more intelligence at breakneck pace.
Somewhere along this accelerating continuum of intelligence, we need to consciously decide to make things awesome. So people aim to build competent self-driving cars, so that fewer people die of drunk driving or boredom. Let's keep trying. Keep trying to give without thought of getting something in return. Try to make the world you want to live in. Take a stand against things that are harmful to your body (in the large sense and small sense) and your character. Live long and prosper!!!
Yeah, but we were never forced into this global boiler room where we're constantly confronted with each other's thoughts and opinions. Thank you, social media. It's like there is no intellectual breathing room anymore. Enough to make anyone go mad and want to push the button.
It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next.
This has been done many times by human-run militaries; would AI make it worse somehow?
Groups of humans acting collectively can look a lot like an "AI" from the right perspective. Corporations focused on optimizing their profit spend a huge amount of collective intelligence to make this single number go up, often at the expense of the rest of society.
No doubt that his "inside arguments" have been rebutted extensively by the strong AI optimists and their singularity priests. After all, dreaming up scenarios in which robotic superintelligence dominates humanity is their version of saving the world.
That's why I found the "outside arguments" here equally important and compelling.
> The outside view doesn't care about content, it sees the form and the context, and it doesn't look good.
If it sounds like and acts like a cult, why should we treat it any differently from a cult? Even if the people in it are all very smart, wealthy, well-dressed, and appear very rational, they're still preaching the end of the world on a certain date. All of those groups have only one thing in common: they're all wrong.
The best rebuttals to all this are the least engaging.
"Dude, are you telling me you want to build Skynet?"
The poster's rebuttals against the threat are totally under-thought cop-outs. For example, his first argument, about "how would Hawking get his cat in a cage?": just put food in it. It's not hard to imagine an AI could come up with a similar motivation to get humans to do what it wants.
That's not to say that his general premise is wrong, but it's hard for me to take it seriously when his rebuttals are this weak.
> If it sounds like and acts like a cult, why should we treat it any differently from a cult? Even if the people in it are all very smart, wealthy, well-dressed, and appear very rational, they're still preaching the end of the world on a certain date. All of those groups have only one thing in common: they're all wrong.
Occasionally the crazies are right. Remember when the idea that the NSA was recording everyone's emails was paranoid conspiracy-theory talk? Turns out they were actually doing it the whole time.
The fact that the world hasn't ended tells us virtually nothing about how likely the end of the world is, for the simple reason that if the world had ended we wouldn't be here to talk about it. So we can't take it as evidence, or at least not as conclusive evidence. Note also that the same argument works just as well against global warming as it does against AI risk.
Turn it around. Suppose there was a genuinely real risk of destroying the world. How would you tell? How would you distinguish between the groups that had spotted the real danger and the run-of-the-mill end-of-the-world cults?
I'll push back against the idea of smart factories leading to "a vast chasm between a new, tiny Hyperclass and the destitute masses." I mean, if the masses are destitute, they can't afford the stuff being made at those fancy factories, so the owners of those factories won't make money. Income inequality obviously benefits the rich (in that they by definition have more money), but only up to a point. We won't devolve into an aristocracy, at least not because of automation.
> I mean, if the masses are destitute, they can't afford the stuff being made at those fancy factories, so the owners of those factories won't make money.
I think that's beside the point. Why would the "aristocracy" owning the factories care about money, when they have all the goods (or can trade for them with other aristos)?
It's not like they need money to pay other people (the destitute masses are useless to them). With their only inherent "capital", the ability to work, made worthless by automation, the destitute masses have no recourse: they get slowly extinguished, until only a tiny fraction of humanity is left.
What matters is the total size of the market and the total size of the labor pool. If the Hyperclass have more wealth than all of humanity had prior, they don't need to sell to the masses to make money. If the labor pool is mostly machines they own, they don't need to pay the masses, or even need a functioning market among the masses. In the degenerate case, where a single individual controls all wealth, if they have self-running machines that can do everything necessary to make more self-running machines, that individual can continue to get richer (in material goods; money makes no sense in this case).
Money is just a stand-in for resources and labor, and if automation makes labor very very cheap, the rich will only need the natural resources the poor sit on, not anything from the poor themselves.
There are two failure modes being discussed here:
- a military AI in the hands of bad actors who intentionally do bad things with it.
- a badly coded runaway AI that destroys the Earth.
These two failure modes are not mutually exclusive. When nukes were first developed, the physicists thought there was a small but plausible chance, around 1%, that detonating a nuke would ignite the air and blow up the whole world.
Let's imagine we live in a world where they're right. Let's suppose somebody comes around and says, "let's ignore the smelly, badly dressed, megalomaniacal physicists and their mumbo jumbo; the real problem is if a terrorist gets their hands on one of these and blows up a city."
Well, yes, that would be a problem. But the other thing is also a problem. And it would kill a lot more people.
I mean, if you made me a disembodied mind connected to the internet that never needs to sleep and can make copies of itself, I would be able to effectively take over the world in ~20-50 years, possibly much less time than that.
I make lots of money right now completely via the internet, and I am not even breaking any laws. It is quite probable that an AI at our present level of intelligence could very quickly amass a fortune and leverage it to control everything that matters without humanity even being aware of the changeover.
There are also nearer-term threats (although I'd likely disagree on many specifics), but I don't see how that erases longer-term threats. One nuclear bomb being able to destroy your city now doesn't mean that ten thousand can't destroy your whole country ten years down the line.
I think the point (which is addressed with Maciej's Alamogordo callback near the end) is that the longer-term threat being speculated about, dependent as it is on lots of very hypothetical things being true, is pretty much irrelevant in the face of bigger problems. I mean, yes, a superpower that had hard military AI could wreak a lot of havoc. On the other hand, if a superpower wants to wipe out my corner of civilisation, it can do so perfectly happily with the weapons at its disposal today (though just to be on the less-safe side, the US President-elect says he wants to build a few more). And when it comes to computer systems and ML, there's already a colossal corpus of our communications going into some sort of black box that tries to find evidence of terrorism, and that is probably more dangerous to the average non-terrorist precisely because it isn't superintelligent.
Ultimately, AI is neither necessary nor sufficient for the powerful to kill the less powerful.
And if it's powerful people trying to build hard military AI, they probably aren't reading LessWrong to understand how to ensure their AI plays nice anyway.
It's possible that we could face both AI risks consecutively! First a tiny hyperclass conquers the world using a limited superintelligence and commits mass genocide, and then a more powerful superintelligence is created and everyone is made into paperclips. Isn't that a cheery thought. :-)
The real danger of AI is that it allows people to hide ethically dubious decisions they've made behind algorithms. You plug some data into a system, a decision gets made, and everyone just sort of shrugs their shoulders and doesn't question it.
Yes, that's the ultimate threat. But in the meantime, the threat is that the military will think the AI is "good enough" to start killing on its own, while the AI actually gets it wrong a lot of the time.
Kind of like what we're already seeing now in courts, and kind of how the NSA's and CIA's own algorithms for assigning a target are still far less than 99% accurate.
"I live in California, which has the highest poverty rate in the United States, even though it's home to Silicon Valley. I see my rich industry doing nothing to improve the lives of everyday people and indigent people around us."
This is trivially false. Over a hundred billionaires have now pledged to donate the majority of their wealth, and the list includes many tech people like Bill Gates, Larry Ellison, Mark Zuckerberg, Elon Musk, Dustin Moskovitz, Pierre Omidyar, Gordon Moore, Tim Cook, Vinod Khosla, etc, etc.
This only includes purely non-profit activity; it doesn't count how, e.g., cellphones, a for-profit industry, have dramatically improved the lives of the poor.
I feel the problem is the fact that there are 100 billionaires in the first place; no one gets rich on their own. Gates et al. are clever, but didn't get where they are totally independently, without others' support, so they should give back.
Also, some of these billionaires are running companies that are great at tax avoidance, probably most of them. Now what? They get to pick and choose where they spend/invest their money? I don't buy it.
I believe in wealth, just not this radical wealth separation.
Countries that have no rich people are never prosperous. You can raise marginal income tax rates from, say, 60% to 70%, and maybe that's a good idea overall, but it doesn't get rid of billionaires. High-tax Sweden has as many billionaires per capita as the US does: https://en.wikipedia.org/wiki/List_of_Swedes_by_net_worth
If you raise the marginal tax rate to 99%, then you get rid of billionaires, but you also kill your economy. There are all the failures of communist countries, of course, but even the UK tried this during the '60s and '70s. The government went bankrupt and had to be bailed out by the IMF. Inflation peaked at 27%, unemployment was through the roof, etc.
California is mismanaged to hell. The Bay Area has some of the worst roads in the nation despite very mild weather and a wealthy tax base. It cost $8 billion to build 1 or 2 miles of the Central Subway, and only €11 billion to build the world's longest tunnel under the Alps. I have the same income tax rate as I did in Canada, yet there isn't universal healthcare and there's far more economic inequality. It goes on and on. If you tripled the money base, I don't know how much better it would get.
They don't give back? Microsoft employs 100k+ people. That's giving back. These people all pay taxes and give back to society because Microsoft gave them a job. Because Gates happened. And in the case of Gates, let's not forget the Bill & Melinda Gates Foundation.
What the hell have you done for society?
This story is very similar for most billionaires. They create A LOT of jobs and careers.
Can you explain to me why you think taking all of the wealth of 100 billionaires will help the poor? 100 billion spread over the population of California is less than $3000/person. So you can wipe out all of the billionaires and give everyone $50/week for one year, what is that going to change?
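For what it's worth, the arithmetic above roughly checks out; a quick sketch, assuming a California population of about 39 million (my figure, not the commenter's):

```python
# Rough check of the parent comment's figures.
# Assumption (mine, not the commenter's): ~39 million Californians.
total_wealth = 100e9         # the $100 billion figure used above
population = 39e6

per_person = total_wealth / population
per_week = per_person / 52   # spread over one year

print(f"${per_person:,.0f} per person")  # ~$2,564, i.e. "less than $3000/person"
print(f"${per_week:,.0f} per week")      # ~$49/week, roughly the "$50/week" above
```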
Yes, and even if you ignore philanthropy, the tech industry generates enormous amounts of tax revenue, which is supposed to be spent by the government to help improve the lives of "everyday people and indigent people".
A question people don't ask enough is: given that we give vast trillions of dollars to the government, most of which is spent on various kinds of social programs (health care, education, social security, etc), why is there STILL so much poverty, joblessness, homelessness, drug use, crime, and other kinds of suffering in the US?
Your assumption is that billions of dollars can be simply converted into poverty reduction.
It seems possible to me that the technology to turn money into less poverty not only doesn't exist, but that the social structures that make men like Bill Gates rich also make it difficult to create such technology.
Your implicit argument is that today's rich somehow care more about improving society than yesterday's did, which will cause these concentrations of wealth to lead to a different outcome. I'm not sure I see much of a difference between Gates and Carnegie. Different ideas about what the world needs, but not a particularly different approach to capital.
How does a pledge about something you may or may not do in the future help poor people today? How does "the majority of their wealth" address income inequality? Will said billionaires give away so much that they cease to be billionaires or even millionaires?
And how are the actions of a few billionaires relevant to what the industry does as a whole? Does Google, Facebook, or, God forbid, Uber, address the problems of poverty and inequality (which are separate problems) as a company?
To a very large extent, charity is irrelevant; charity is a way of buying oneself a conscience without actually changing anything in the world; without even addressing the problems or thinking about them.
I say they should keep their money and control! Educate and involve them in important things early. The HARC initiative looks great. Such an initiative could answer questions like:
What are important problems?
What do we need to do to efficiently solve such problems?
Have we spent too much effort on a single solution? Is it time to try another way?
What can we do to bypass bureaucracy?
I trust businesses to have a mindset for risk and results. In my opinion, charities behave more like guardians, preserving and nurturing rather than making a 10X change.
That interpretation may make it true, but it would also seem to make it irrelevant. Unless Californians somehow have greater moral significance than people elsewhere.
Err, I hate to be the one to break it to you, but those billionaires pledging to donate their wealth? It's just a tax dodge. They're moving their money into foundations before we pass stronger tax laws than we have at the moment. And it allows their families to continue to live off the wealth for generations to come (via salaries for running the foundations).
Which is not to say they don't do some good work with it, the Bill and Melinda Gates foundation has done some great work fighting malaria and bringing fresh water to poor communities.
But these same foundations also do a lot of other work, like furthering charter schools, which benefits wealthy families to the detriment of poor communities here in the US.
The ACA could barely roll out a website without tragic failure; how have cell phones "dramatically improved" the lives of the poor? They're still subject to as much bureaucracy and denial of basic services as ever. 4G hasn't improved public transpo.
I suppose the poor are no longer subject to long-distance fees during daytime.
This article explicitly endorses argument ad hominem:
"These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you're dealing with a cult. Of course, they have a brilliant argument for why you should ignore those instincts, but that's the inside view talking. The outside view doesn't care about content, it sees the form and the context, and it doesn't look good."
The problem with argument ad hominem isn't that it never works. It often leads to the correct conclusion, as in the cult case. But the cases where it doesn't work can be really, really important. 99.9% of 26-year-olds working random jobs inventing theories about time travel are cranks, but if the rule you use is "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity).
>This article explicitly endorses argument ad hominem
That's because it's very effective in practice.
In the real world (which is not a pure game of logical reasoning only played by equals and fully intelligent beings without hidden agendas), the argument ad hominem can be a very powerful way to cut through BS, even if you can't explain why they are BS by pure reason alone.
E.g., say a person A with IQ 110 talks with a person B of IQ 140. The second person makes a very convincing argument for why the first person should do something for them. Logically it is faultless as far as person A can see. But if person A knows that person B is shady, has fooled people in the past, has this or that private interest in the thing happening, etc., then he might use an "argument ad hominem" to reject B's proposal. And he would be better off for it.
The "argument ad hominem" is even more useful in another very common scenario: when we don't have time to evaluate every argument we hear, but we know some basic facts about the person making the argument. The "argument ad hominem" helps us short out potentially seedy, exploitative, etc. arguments fast.
Sure, it also gives false negatives, but empirically a lot of people have found that it gives more true negatives/positives (that is, if they want to act on something someone says, without delving into it finely, the fastest effective criterion would be to go with whether they trust the person).
This is not only because we don't have the time to fully analyze all arguments/proposals/etc we hear and need to find some shortcuts (even if they are imperfect), but also because we don't have all the details to make our decisions (even if we have a comprehensive argument from the other person, there can be tons of stuff left out that will also be needed to evaluate it).
It's a reasonable heuristic for when you just don't have the time or energy, but if you are giving a 45-minute keynote speech on the topic, I think you are expected to make the effort to judge an idea on its merits.
Einstein didn't look like a crank, though. His papers are relatively short and coherent, and he either already had a PhD in physics or was associated with an advisor (I didn't find a good timeline; he was awarded the PhD in the same year he published his 4 big papers).
Cranks lack formal education and spew forth gobbledygook in reams.
By this measure, I would say Bostrom is not a crank. Yudkowsky is less clear. I'd say no, but I'd understand if Yudkowsky trips some folks' crank detectors.
He was awarded the doctorate for one of the papers (the photoelectric one, if memory serves), after extending it by one sentence to meet the minimum length requirement.
> but if the rule you use is "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity)
No you don't, you just don't catch it right away, relativity actually holds up under scrutiny. Besides, I reject the premise anyway.
Einstein did serious work on the photoelectric effect first and then gradually worked towards relativity. Outside of the pop history, he had very little in common with cranks. This is basically what you end up seeing when you look into any of these examples used to argue against the ability to pattern-match cults and cranks: the so-called false negatives never (to my knowledge) actually match the profile. Only the fairy tale built around their success matches it.
So it is with cult-ish behaviour as well. These are specific human failures that self-reinforce, and while some of their behaviour occurs in successful people (especially successful and mentally ill or far-from-neurotypical people), there is a core of destructive and unique behaviour evident in both that you absolutely should recognize and avoid. It's not just the statistical argument that you will gain much more than you lose by avoiding it; it's that it is staggeringly improbable that you will lose anything.
Yep, Einstein was an expert in his field who wrote a couple of ground-breaking papers in his field. As far as I can tell, no one who is an expert in AI (or even similar fields) is worried at all about superintelligence.
That isn't exactly what it's doing. It's proposing that there are two ways we evaluate things: deeply examining and rationally analyzing them to identify specific strengths and weaknesses, and using the very fast pattern-matching "feeling" portions of our brains to identify nonspecific problems. These correspond to "System 2" and "System 1", respectively, of Thinking, Fast and Slow.
Having established that people evaluate things these two ways, the author then says, "I will demonstrate to both of these ways of thinking that AI fears are bogus."
It's also a perfectly apt description of, say, certain areas in academia - one that I'm pretty sympathetic to after seeing postmodern research programs in action! Hell, postmodernism is a bigger idea that eats more people than superintelligence could ever hope to.
And yet I suspect that many of the people swayed by one application of the argument won't be swayed by the other and vice versa. Interesting, isn't it?
OK, so I ignore Einstein, and I miss general relativity. And then what? If it's proven true before I die, then I accept it; if it isn't, or if it is and I continue to ignore it, I die anyway. And then it's 2015 and it's being taught to schoolchildren. High-school-educated people who don't really know the first damn thing about physics, like non-hypothetical me, still have a rough idea of what relativity is and what the repercussions are.
Meanwhile, rewind ~100 years, and suppose you ignored the luminiferous aether. Or suppose you straight away saw Einstein was a genius? Oh, wait... nobody cares. Because you're dead.
So I'm not sure what the long-term problem is here.
Meanwhile, you, personally, can probably safely ignore people that appear to be cranks.
The argument ad hominem here actually refers to the credibility of the source of an argument. If someone has a clear bias (cults like money and power), then you keep in mind that their arguments are the fruit of a poisoned tree.
That example is bad, but the arguments aren't quite as objectionable.
"What kind of person does sincerely believing this stuff turn you into? The answer is not pretty.
"I'd like to talk for a while about the outside arguments that should make you leery of becoming an AI weenie. These are the arguments about what effect AI obsession has on our industry and culture:..."
...grandiosity, megalomania, avoidance of actual current problems. Aside from whether the superintelligence problem is real, those believing it is seem less than appealing.
"This business about saving all of future humanity [from AI] is a cop-out. We had the same exact arguments used against us under communism, to explain why everything was always broken and people couldn't have a basic level of material comfort."
>We had the same exact arguments used against us under communism, ...
What nonsense. None of the credible people suggesting that superintelligence has risk are spouting generic arguments that apply to communism or any previous situation.
The question is not IF humanity will be replaced but WHEN and HOW.
Clearly, in a world with superintelligence growing at a technological pace, instead of evolutionary pace, natural humanity will not remain dominant for long.
So it makes enormous sense to worry about:
* Whether that transition will be catastrophic or peaceful.
* Whether it happens suddenly in 50 years or in a managed form over the next century.
* Whether the transition includes everyone, a few people, or none of us.
SR and GR explicitly allow time-travel into the future. Which isn't a fully general Time Machine, of course, but is a huge change from 19th-century physics. If SR had just been invented today, and someone who thought it was crazy and didn't know the math was writing a blog post about it, I 100% expect they'd call it "the time travel theory" or some such thing.
It's not an ad hominem argument if the personal characteristics are relevant to the topic being discussed. The personal characteristics of the people in his example have empirically been found to be a good indicator of crankhood.
Hawking, Musk, et al. are highly successful people with objectively valuable contributions, who are known to be able to think deeply and solve problems that others have not.
They are as far from cranks as anyone could possibly be.
Anyone can find non-argument related reasons to suggest anyone else is crazy or a cultist, because no human is completely sane.
What someone cannot do (credibly), is claim that real experts are non-expert crazies, over appearances while completely ignoring their earned credentials.
> The problem with argument ad hominem isn't that it never works. It often leads to the correct conclusion ...
You immediately self-contradicted here. If a decision heuristic often leads to the correct conclusion, then it's a good heuristic (if you are under time constraints, which we are).
Of course, given unlimited time to think about it, we would never use ad hominem reasoning and would instead consider each and every argument fully. But there are tens of thousands of cults across the world, each insisting that they possess the Ultimate Truth, and that you can have it if you spend years studying their doctrine. Are you carefully evaluating each and every one to give them a fair shake? Of course not. Even if you wanted to, there is not enough time in a human lifespan. You must apply pattern-matching. The argument being made here isn't really an ad hominem, it's more like "The reason AI risk-ers strongly resemble cults is because they functionally are one, with the same problems, and so your pattern-matching algorithm is correct". Note that the remainder of the talk is spent backing up this assertion.
There's a good discussion of this in the linked article about "learned epistemic helplessness" (and cheers to idlewords for the cheeky rhetorical judo of using Scott Alexander and LW-y phrases like "memetic hazard" in an argument against AI risk), but what it boils down to is that our cognitive shortcuts evolved for a reason. Sometimes the reason is rooted in our ancestral environment and no longer applies, but that is not always true. When you focus solely on their failure cases, you lose sight of how often they get things right... like protecting you from cults with the funny robes.
> "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity).
A lot of people did ignore Einstein until observations like the anomalous precession of Mercury's perihelion provided empirical evidence for relativity, and they were arguably right to do so.
A good heuristic leads to a reduction in the overall cost of a decision (combining the cost of making the decision with the cost of the consequences if you get it wrong).
A heuristic like "it's risky to rent a car to a male under 25" saves a lot of cost in terms of making the decision (background checks, accurately assessing the potential renter's driving skills and attitude towards safety, etc.) and has minimal downside (you only lose a small fraction of potential customers) and so it's a good heuristic.
A heuristic like "a 26-year-old working a clerical job who makes novel statements about the fundamental nature of reality is probably wrong" does reduce the decision cost (you don't have to analyze their statements) but it has a huge downside if you're wrong (you miss out on important insights which allow a wide range of new technologies to be developed). So even though it's a generally accurate heuristic, the cost of false negatives means that it's not a good one.
> If a decision heuristic often leads to the correct conclusion, then it's a good heuristic (if you are under time constraints, which we are).
So, is it a good heuristic to conclude that since crime is related to poverty and minorities tend to be poor, minorities qua minorities ought to be shunned?
"Not many people know that Einstein was a burly, muscular fellow. But if Einstein tried to get a cat in a carrier, and the cat didn't want to go, you know what would happen to Einstein. He would have to resort to a brute-force solution that has nothing to do with intelligence, and in that matchup the cat could do pretty well for itself."
This seems, actually, like a perfect argument going in the other direction. Every day, millions of people put cats into boxes, despite the cats not being interested. If you offered to pay a normal, reasonably competent person $1,000 to get a reluctant cat in a box, do you really think they simply would not be able to do it? Heck, humans manage to keep tigers in zoos, where millions of people see them every year, with a tiny serious injury rate, even though tigers are aggressive and predatory by default and can trivially overpower humans.
I'm not arguing that it's useless to outsmart a cat. I'm disputing the assumption that being vastly smarter means your opponent is hopelessly outmatched and at your mercy.
If you're the first human on an island full of tigers, you're not going to end up as the Tiger King.
Well, as a cat owner I'll give you this: as with any other animal, there are tricks you can exploit to coerce a cat without using physical force.
One way to get a cat into a carrier - well, the catfood industry created those funny little dry food pellets that are somehow super-addictive. Shake the box, my cat will come. Drop one in the carrier, it surely will enter. Will it eventually adapt to the trick? Maybe, but not likely if I also do this occasionally without closing the carrier door behind the cat.
Yes, we can outsmart the cat. Cats are funny because they do silly, unpredictable things at random, not because they can't be reliably tricked.
The issue is that in this case "vastly smarter" is not smart enough to truly understand the cat. It's conceivable an AI with tons of computing power could simulate a cat and reverse engineer its brain to find any stimulus that would cause it to get in the cage.
I also think this isn't a very good analogy. In this case we're talking about manipulating humans, where we already know manipulation is eminently possible.
Heck it wouldn't even need psychological manipulation. Hack a Bitcoin exchange or find another way of making money on the internet, then it can just pay someone absurd sums to do what it wants.
And yet somehow humans rule the planet and tigers are an endangered species, surviving only as a result of specific human efforts to conserve them because some humans care about doing so.
How well an AI could survive on a desert island is an irrelevant question when Amazon, Google and dozens of others are already running fully (or as near as makes no difference) automated datacentres controlled by a codebase that still has parts written in C. Hawking can easily get the cat in the container: all he has to do is submit a job to TaskRabbit.
Of course, if you put your average city dweller on your island, he will probably die of thirst before the tigers get to him. But take an (unarmed) Navy SEAL as your human on the island, and I'm pretty sure in a couple of months he will be the Tiger King.
And Hawking would just ask his assistant to put the cat into the box. You are artificially depriving him of his actual resources to make a weak point.
There's no mention of iteration here, which is really what powers intelligence-based advantages.
The first time Random Human A attempts to get Random Cat B into a box, they're going to have a hard time. They'll get there eventually, but they'll be coughing from the dust under the bed, swearing from having to lift the bloody sofa up, and probably have some scratches from after they managed to scare the cat enough for it to try attacking.
However, speaking as a cat owner, if you've iterated on the problem a dozen or so times, Cat B is going in the box swiftly and effortlessly. Last time I put my cat in its box, it took about 3 minutes. Trying for the bed? Sorry, door's closed. Under the sofa? Not going to work. Trying to scratch me? Turns out cat elbows work the same way as human elbows.
The same surely applies to a superintelligent AI?
(Likewise with the Navy SEAL on the Island of the Tigers. Just drop one SEAL in there with no warning? He's screwed. Give a SEAL unit a year to plan, access to world experts on all aspects of Panthera tigris, and really, really good simulators (or other iteration method) to train in? Likely a different story.)
They always miss a critical and subtle assumption: that intelligence scales equal to or faster than the computational complexity of improving that intelligence.
This is the one assumption I am most skeptical of. In my experience, each time you make a system more clever, you also make it MUCH more complex. Maybe there is no hard limit on intelligence, but maybe each generation of improved intelligence actually takes longer to find the next generation, due to the rapidly ramping difficulty of the problem.
I think people see the exponential-looking growth of technology over human history, and just kinda interpolate or something.
I think the issue is that once we do manage to build an AI that matches human capabilities in every domain, it will be trivial to exceed human capabilities. Logic gates can switch millions of times faster than neurons can pulse. The speed of digital signals also means that artificial brains won't be size-limited by signal latency in the same way that human brains are. We will be able to scale them up, optimize the hardware, make them faster, give them more memory, perfect recall.
Nick Bostrom keeps going on in his book about the singularity, and about how once AI can improve itself it will quickly be way beyond us. I think the truth is that the AI doesn't need to be self-improving at all to vastly exceed human capabilities. If we can build an AI as smart as we are, then we can probably build one a thousand times as smart too.
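The "millions of times faster" claim above is easy to sanity-check with rough, commonly cited orders of magnitude (these are ballpark figures, not precise measurements): neurons fire at most around a kilohertz, while transistors in a modern processor switch at gigahertz rates.

```python
# Back-of-envelope ratio using rough, commonly cited orders of magnitude.
neuron_max_firing_rate_hz = 1e3      # generous upper end, ~1 kHz
transistor_switching_rate_hz = 3e9   # ~3 GHz, a typical modern clock

ratio = transistor_switching_rate_hz / neuron_max_firing_rate_hz
print(f"{ratio:,.0f}x faster")  # ~3,000,000x, i.e. "millions of times faster"
```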
> it will be trivial to exceed human capabilities. Logic gates can switch millions of times faster than neurons
You're equating speed with quality. There's no reason to assume that. Do you think an AI will be better at catching a fieldmouse than a falcon? Do you think the falcon is limited by speed of thought? Many forms of intelligence are limited by game theory, not raw speed. The challenge isn't extracting large quantities of information, it's knowing which information is relevant to your ends. And that knowledge is just as limited by the number of opportunities for interaction as the availability of analytic resources.
Think of it this way: most animals could trivially add more neurons. There are plenty of outliers who got a shot, but bigger-brained individuals obviously hit diminishing returns, otherwise the population would've shifted already.
There's also another thing. AI may not need to be superhuman, it may be close-but-not-quite human and yet be more effective than us - simply because we carry a huge baggage of stuff that a mind we build won't have.
Trust me, if I were to be wired directly to the Internet and had some well-defined goals, I'd be much more effective at it than any of us here - possibly any of us here combined. Because as a human, I have to deal with stupid shit like social considerations, random anxiety attacks, the drive to mate, the drive of curiosity, etc. Focus is a powerful force.
But making them linearly faster to scale them up doesn't help if the difficulty of the problems they face isn't linear. When it comes to making more clever things, I strongly doubt the difficulty is even a remotely small polynomial.
You're equating human time with AI/computer time. A one-day-old neural net has already experienced multiple lifetimes' worth of images before it is able to beat you at image recognition. It's not trivial, but we just gloss over the extremely complex training phase because it runs on a different clock speed than us.
I can't disagree enough. Having recently read Superintelligence, I can say that most of the quotes taken from Bostrom's work were disingenuously cherry-picked to suit this author's argument. S/he did not write in good faith. To build a straw man out of Bostrom's theses completely undercuts the purpose of this counterpoint. If you haven't yet read Superintelligence or this article, turn back now. Read Superintelligence, then this article. It'll quickly become clear to you how wrongheaded this article is.
Too late to edit, so I'll post just a few examples here:
>The only way out of this mess is to design a moral fixed point, so that even through thousands and thousands of cycles of self-improvement the AI's value system remains stable, and its values are things like 'help people', 'don't kill anybody', 'listen to what people want'.
Bostrom absolutely did not say that the only way to inhibit a cataclysmic future for humans post-SAI was to design a "moral fixed point". In fact, many chapters of the book are dedicated to exploring the possibilities of ingraining desirable values in an AI, and the many pitfalls in each.
Regarding the Eliezer Yudkowsky quote, Bostrom spends several pages, IIRC, on that quote and how difficult it would be to apply to machine language, as well as what the quote even means. This author dismissively throws the quote in without acknowledgement of the tremendous nuance Bostrom applies to this line of thought. Indeed, this author does that throughout his article - regularly portraying Bostrom as a man who claimed absolute knowledge of the future of AI. That couldn't be further from the truth, as Bostrom opens the book with an explicit acknowledgement that much of the book may very well turn out to be incorrect, or based on assumptions that may never materialize.
Regarding "The Argument From My Roommate", the author seems to lack complete and utter awareness of the differences between a machine intelligence and human intelligence. That a superintelligent AI must have the complex motivations of the author's roommate is preposterous. A human is driven by a complex variety of push and pull factors, many stemming from the evolutionary biology of humans and our predecessors. A machine intelligence need not share any of that complexity.
Moreover, Bostrom specifically notes that while most humans may feel there is a huge gulf between the intellectual capabilities of an idiot and a genius, these are, in more absolute terms, minor differences. The fact that his roommate was/is apparently a smart individual likely would not put him anywhere near the capabilities of a superintelligent AI.
To me, this is the smoking gun. I find it completely unbelievable that anyone who read Superintelligence could possibly assert "The Argument From My Roommate" with a straight face, and thus, I highly doubt that the author actually read the book which he attacks so gratuitously.
"The assumption that any intelligent agent will want to recursively self-improve, let alone conquer the galaxy, to better achieve its goals makes unwarranted assumptions about the nature of motivation."
Why wouldn't it if it is able to? It doesn't have to "want" to self-improve, it only has to want anything that it could do better if it was smarter. All it needs is the ability, the lack of an overwhelming reason not to, and a basic architecture of optimizing towards a goal.
If you knew an asteroid would hit the earth 1 year from now, and you had the ability to push a button and become 100,000x smarter, I would hope your values would lead you to push the button because it gives you the best chance of saving the world.
- AI as management. Already, there is at least one hedge fund with an AI on the board, with a vote on investments.[1] At the bottom end, there are systems which act as low-level managers and order people around. That's how Uber works. A fundamental problem with management is that communication is slow and managers are bandwidth-limited. Computers don't have that problem. Even a mediocre AI as a manager might win on speed and coordination. How long until an AI-run company dominates an industry?
- Related to this is "machines should think, people should work." Watch this video of an Amazon fulfillment center.[2] All the thinking is done by computers. The humans are just hands.
It's hard for humans to operate on more than 7 objects at the same time - a limitation of working memory. So naturally there are simple management and planning tasks that benefit from computers' ability to track more objects.
The real AI risk isn't an all-powerful savant which misinterprets a command to "make everyone on Earth happy" and destroys the Earth. It's a military AI that correctly interprets a command to kill a particular group of people, so effectively that its masters start thinking about the next group, and the next. It's smart factories that create a vast chasm between a new, tiny Hyperclass and the destitute masses... AI is hardly the only technology powerful enough to turn dangerous people into existential threats. We already have nuclear weapons, which like almost everything else are always getting cheaper to produce. Income inequality is already rising at a breathtaking pace. The internet has given birth to history's most powerful surveillance system and tools of propaganda.
The only thing which keeps you alive is the unwillingness of your neighbors and those who surround you to kill you. The law might punish them afterward, but extensive research has shown that it provides no disuasion to people who are actually willing to kill someone.
A military AI being used to wipe out large numbers of people is exactly as 'inevitable' as the weapons we already have being used to wipe out large numbers of people. The exact same people will be making the decisions and setting the goals. In that scenario, the AI is nothing but a fancy new gun, and I don't see any reason to think it would be used differently in most cases. With drones we have seen the CIA, a civilian intelligence agency, waging war on other nations without any legal basis, but that's primarily a political issue and the fact that it can be done in pure cowardice, without risking the life of those pulling the trigger, which I think is a distinct problem from AI.
To be fair, it's a small step from effective AI that doesn't malfunction, to an AI over which humans have lost control. It's precisely one vaguely specified command away in fact, and humans are quite excellent at being imprecise.
Deleted Comment
Anyone power that has significant leverage over another power already has that ability.
A bunch of 'super smart evil AI robots' will not be able to physical deter/control 500 million Europeans - but - a small Army of them would be enough to control the powers that be, and from there on in it trickles down.
Much the same way the Soviets controlled Poland et. al. with only small installations. The 'legitimate threat of violent domination' is all that is needed.
So - many countries already have the power to do those things to many, many others via conventional weapons and highly trained soldiers. That risk is already there. Think about it: a decent soldier today is already pretty much a 'better weapon' than AI will be for a very, very long time. And it' not that hard to make decent soldiers.
The risk for 'evil AI robots' is that a non-state, inauthentic actor - like a terrorist group, militia etc. gets control of enough of them to do project power.
The other risk I think, is that given the lack of bloodshed, states may employ them without fear of political repercussions at home. We see this with drones. If Obama had to do a 'seal team 6' for every drone strike, many, many of those guys would have died, and people coming home on body bags wears on the population. Eventually the war-fever fades and they want out.
Deleted Comment
Dead Comment
Whatever goals the AI has, it will certainly be better at achieving them if it can stay alive. And it will be more likely to stay alive if there are no humans around to interfere. Now you might say, why don't we just hardcode in a goal to the AI like "solve aging, and also don't hurt anyone"? And ensure that the AI's method of achieving its goals won't have terrible unintended consequences? Oh, and the AI's goals can't change? This is called the AI control problem, and nobody's been able to solve it yet. It's hard to come up with good goals for the AI. It's hard to translate those goals into math. It's hard to prevent the AI from misinterpreting or modifying its own goals. It's hard to work on AI safety when you don't know what the first strong AI will look like. It's hard to prove with 99.999% certainty that your safety measures will work when you can't test them.
Things will not turn out okay if the first organization to develop strong AI is not extremely concerned about AI risk, because the default is to get AI control wrong, the same way the default is for planets to not support life.
My counterpoint to the risks of more limited AI is that limited AI doesn't sound as scary when you rename it statistical software, and probably won't have effects much larger in magnitude than the effects of all other kinds of technology combined. Limited AI already does make militaries more effective, but most of the problem comes from the fact that these militaries exist, not from the AI. It's hard for me to imagine an AI carrying out a military operation without much human intervention that wouldn't pose a control problem.
--------- Edited in response to comment--------
These are not bad arguments you are making, or hard ones to get behind. There are just added layers of complexity that the author would like us to think about. Things like how we could actually 'hard-code' a limit or a governor on certain types of motivation. Or what 'motivation' is even driven by at all.
I think you'll enjoy the originally linked article. It's got a lot to consider.
This is a sequence of deductive reasoning that you brought up there. Quite natural for human beings, but why would the paperclip maximiser be equipped with it?
Seriously, the talk specifically addresses most of the points that you brought up.
Shit is complicated yo. Complicated like the world is - not complicated like an algorithm is. Those are entirely different dimensions of complicated that are in fact incomparable.
This scenario I agree with. For instance: the AI decides that it doesn't want to live on this planet and consumes our star for energy, or exploits our natural resources leaving us with none.
The whole AI war scenario is highly unlikely. As per the article, the opponents of AI are all regarded as prime examples of human intelligence - many of them have voiced opposition to war and poverty (by virtue of being philanthropists). Surely something more intelligent than humans would be even less inclined to wage war. Furthermore, every argument against AI posits that humans are far more important than they really are. How much time of day do you spend thinking about bacteria in the Marianas Trench?
> AI control
My argument makes an exception for this scenario. By attaching human constraints to an AI, you intrinsically attach human ideologies to it. This may limit the reach of the superintelligence - which means we create a machine that is better at human-level intelligence than humans are. Put simply, we'd plausibly create an AI rendition of Trump.
Why would this ever apply? We're building them, not picking them out of a hat.
You know, you don't need to go that far. You know what a great way to kill a particular group of people is? Well, let's take a look at what a group of human military officers decided to do (quoting from a paper of Elizabeth Anscombe's, discussing various logics of action and deliberation):
""" Kenny's system allows many natural moves, but does not allow the inference from "Kill everyone!" to "Kill Jones!". It has been blamed for having an inference from "Kill Jones!" to "Kill everyone!" but this is not so absurd as it may seem. It may be decided to kill everyone in a certain place in order to get the particular people that one one wants. The British, for example, wanted to destroy some German soldiers on a Dutch island in the Second World War, and chose to accomplish this by bombing the dykes and drowning everybody. (The Dutch were their allies.) """
There's a footnote:
""" Alf Ross shews some innocence when he dismisses Kenny’s idea: ‘From plan B (to prevent overpopulation) we may infer plan A (to kill half the population) but the inference is hardly of any practical interest.’ We hope it may not be. """
It's not an ineffective plan.
Machine learning advances are predicated on the internet, will grow the internet, and will become what we already ought to know we are: a globe-spanning hyper-intelligence working to make more intelligence at breakneck pace.
Somewhere along this accelerating continuum of intelligence, we need to consciously decide to make things awesome. So people aim to build competent self-driving cars, so that fewer people die of drunk driving or boredom. Let's keep trying. Keep trying to give without thought of getting something in return. Try to make the world you want to live in. Take a stand against things that are harmful to your body (in the large sense and the small sense) and your character. Live long and prosper!!!
And in an almost miraculous result, we've managed not to annihilate each other with them so far.
> Income inequality is already rising at a breathtaking pace.
In the US, yes, but inequality is lessening globally.
> The internet has given birth to history's most powerful surveillance system and tools of propaganda.
It has also given birth to a lot of good things, some that are mentioned in a sibling comment.
This has been done many times by human-run militaries; would AI make it worse somehow?
Groups of humans acting collectively can look a lot like an "AI" from the right perspective. Corporations focused on optimizing their profit spend a huge amount of collective intelligence to make this single number go up, often at the expense of the rest of society.
Soldiers in developed countries no longer want to die en masse.
That's why I found the "outside arguments" here equally important and compelling.
> The outside view doesn't care about content, it sees the form and the context, and it doesn't look good.
If it sounds like and acts like a cult, why should we treat it any differently from a cult? Even if the people in it are all very smart, wealthy, well-dressed, and appear very rational, they're still preaching the end of the world on a certain date. All of those groups have only one thing in common: they're all wrong.
The best rebuttals to all this are the least engaging.
"Dude, are you telling me you want to build Skynet?"
That's not to say that his general premise is wrong, but it's hard for me to take it seriously when his rebuttals are this weak.
Occasionally the crazies are right. Remember when the idea that the NSA was recording everyone's emails was paranoid conspiracy-theory talk? It turns out they were actually doing it the whole time.
The fact that the world hasn't ended tells us virtually nothing about how likely the end of the world is, for the simple reason that if the world had ended we wouldn't be here to talk about it. So we can't take it as evidence, at least, not conclusive evidence. Note also that the same argument works just as well against global warming as it does against AI risk.
Turn it around. Suppose there was a genuinely real risk of destroying the world. How would you tell? How would you distinguish between the groups that had spotted the real danger and the run-of-the-mill end-of-the-world cults?
I think that's beside the point. Why would the "aristocracy" owning the factories care about money, when they have all the goods (or can trade for them with other aristos)?
It's not like they need money to pay other people (the destitute masses are useless to them). With their only inherent "capital" - the ability to work - made worthless by automation, the destitute masses have no recourse: they get slowly extinguished, until only a tiny fraction of humanity is left.
If you as a manufacturer move to a jobless production system, you gain net margin.
If everybody moves to jobless production, the topline demand shrinks radically.
Yet, for each individual manufacturer, the optimal choice is jobless production (aka "loot the commons", aka "defect").
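To make that structure concrete, here is a minimal sketch with made-up payoff numbers (my own illustration, not anything from the comment): automating dominates for each individual firm, yet the everyone-automates outcome is worse for all of them.

  # Toy payoff model; the specific numbers are assumptions chosen only for illustration.
  def payoff(i_automate, others_automating):
      demand = 100 * (1 - 0.8 * others_automating)  # topline demand shrinks as others cut payrolls
      margin = 0.30 if i_automate else 0.10         # automation improves the individual firm's margin
      return demand * margin

  for frac in (0.0, 0.5, 1.0):
      print(f"others automating {frac:.0%}: keep workers = {payoff(False, frac):.1f}, "
            f"automate = {payoff(True, frac):.1f}")
  # Automating pays more whatever the others do (it "dominates"), yet everyone automating
  # (6.0 each) is worse than everyone keeping workers (10.0 each) - the "defect" structure.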
- a military AI in the hands of bad actors who intentionally do bad things with it.
- a badly coded runaway AI that destroys the Earth.
These two failure modes are not mutually exclusive. When nukes were first developed, some physicists worried there was a small but non-zero chance that detonating one would ignite the atmosphere and blow up the whole world.
Let's imagine we live in a world where they're right. Let's suppose somebody comes around and says, "let's ignore the smelly and badly dressed and megalomanic physicists and their mumbo jumbo, the real problem is if a terrorist gets their hands on one of these and blows up a city."
Well, yes, that would be a problem. But the other thing is also a problem. And it would kill a lot more people.
I make lots of money right now entirely via the internet, and I am not even breaking laws. It seems quite probable that an AI at even our present level of intelligence could very quickly amass a fortune and leverage it to control everything that matters, without humanity even being aware of the changeover.
Ultimately, AI is neither necessary nor sufficient for the powerful to kill the less powerful.
And if it's powerful people trying to build hard military AI, they probably aren't reading LessWrong to understand how to ensure their AI plays nice anyway.
http://voxeu.org/article/parametric-estimations-world-distri...
superintelligence of a military AI is worrisome, but superintelligence of a cantankerous thinking is quite reassuring...
Kind of like what we're already seeing now in the courts, and kind of how the NSA's and CIA's own algorithms for assigning a target are still far less than 99% accurate.
This is trivially false. Over a hundred billionaires have now pledged to donate the majority of their wealth, and the list includes many tech people like Bill Gates, Larry Ellison, Mark Zuckerberg, Elon Musk, Dustin Moskovitz, Pierre Omidyar, Gordon Moore, Tim Cook, Vinod Khosla, etc, etc.
https://en.wikipedia.org/wiki/The_Giving_Pledge
Google has a specific page for its charity efforts in the Bay Area: https://www.google.org/local-giving/bay-area/
This only includes purely non-profit activity; it doesn't count how, e.g., cellphones - a for-profit industry - have dramatically improved the lives of the poor.
Also, some of these billionaires are running companies that are great at tax avoidance - probably most of them. Now what? They get to pick and choose where they spend and invest their money? I don't buy it.
I believe in wealth, just not this radical wealth separation.
If you raise the marginal tax rate to 99%, then you get rid of billionaires, but you also kill your economy. There are all the failures of communist countries, of course, but even the UK tried this during the 60s and 70s. The government went bankrupt and had to be bailed out by the IMF. Inflation peaked at 27%, unemployment was through the roof, etc.:
https://en.wikipedia.org/wiki/1976_IMF_Crisis
https://en.wikipedia.org/wiki/Winter_of_Discontent
What the hell have you done for society?
This story is very similar for most billionaires. They create A LOT of jobs and careers.
Source: https://en.wikipedia.org/wiki/List_of_U.S._states_by_poverty...
http://www.forbes.com/sites/chuckdevore/2016/09/28/why-does-...
https://en.wikipedia.org/wiki/Thank_God_for_Mississippi
A question people don't ask enough is: given that we give vast trillions of dollars to the government, most of which is spent on various kinds of social programs (health care, education, social security, etc), why is there STILL so much poverty, joblessness, homelessness, drug use, crime, and other kinds of suffering in the US?
wow, they're awesome.
It seems possible to me that the technology to turn money into less poverty not only doesn't exist, but that the social structures that make men like Bill Gates rich also make it difficult to create such technology.
Your implicit argument is that today's rich somehow care more about improving society than yesterday's did, which will cause these concentrations of wealth to lead to a different outcome. I'm not sure I see much of a difference between Gates and Carnegie. Different ideas about what the world needs, but not a particularly different approach to capital.
And how are the actions of a few billionaires relevant to what the industry does as a whole? Does Google, Facebook, or, God forbid, Uber, address the problems of poverty and inequality (which are separate problems) as a company?
To a very large extent, charity is irrelevant; charity is a way of buying oneself a conscience without actually changing anything in the world; without even addressing the problems or thinking about them.
The Giving Pledge requires that the money be given to philanthropy, which may improve the lives of others around the world, rather than Californians.
Which is not to say they don't do some good work with it; the Bill and Melinda Gates Foundation has done some great work fighting malaria and bringing fresh water to poor communities.
But these same foundations also do a lot of other work, like furthering charter schools, which benefit wealthy families to the detriment of poor communities here in the US.
If setting up a foundation that actually helps a ton of people means that their families can get a fraction of that money back, that's fine with me...
Unless you are implying that they are getting more money back through the salaries than they are donating. In which case I'd love to see a source.
I suppose the poor are no longer subject to long-distance fees during daytime.
There's tons of stuff on this, but, e.g., here's a poster from USAID:
https://s-media-cache-ak0.pinimg.com/originals/09/35/2d/0935...
http://www.ictworks.org/2016/06/27/yes-farmers-do-use-mobile...
Etc. And those are just the easily measured benefits. There are good reasons why more people in Africa have access to cell phones than to clean water.
"These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you're dealing with a cult. Of course, they have a brilliant argument for why you should ignore those instincts, but that's the inside view talking. The outside view doesn't care about content, it sees the form and the context, and it doesn't look good."
The problem with argument ad hominem isn't that it never works. It often leads to the correct conclusion, as in the cult case. But the cases where it doesn't work can be really, really important. 99.9% of 26-year-olds working random jobs inventing theories about time travel are cranks, but if the rule you use is "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity).
That's because it's very effective in practice.
In the real world (which is not a pure game of logical reasoning only played by equals and fully intelligent beings without hidden agendas), the argument ad hominem can be a very powerful way to cut through BS arguments, even if you can't explain why they are BS by pure reason alone.
E.g. say a person A with IQ 110 talks with a person B of IQ 140. The second person makes a very convincing argument for why the first person should do something for them. Logically it is faultless as far as person A can see. But if person A knows that person B is shady, has fooled people in the past, has this or that private interest in the thing happening, etc., then he might use an "argument ad hominem" to reject B's proposal. And he would be better off for it.
The "argument ad hominem" is even more useful in another very common scenario: when we don't have time to evaluate every argument we hear, but we know some basic facts about the person making the argument. The "argument ad hominem" helps us short out potentially seedy, exploitative, etc. arguments fast.
Sure, it also gives false negatives, but empirically a lot of people have found that it gives more true negatives/positives (that is, if they want to act on something someone says, without delving into it finely, the fastest effective criterion would be to go with whether they trust the person).
This is not only because we don't have the time to fully analyze all arguments/proposals/etc we hear and need to find some shortcuts (even if they are imperfect), but also because we don't have all the details to make our decisions (even if we have a comprehensive argument from the other person, there can be tons of stuff left out that will also be needed to evaluate it).
Cranks lack formal education and spew forth gobbledygook in reams.
No, you don't - you just don't catch it right away; relativity actually holds up under scrutiny. Besides, I reject the premise anyway.
Einstein did serious work on the photoelectric effect first and then gradually worked towards relativity. Outside of the pop history he had very little in common with cranks. This is basically what you find when you look into any of the examples used to argue against the ability to pattern-match cults and cranks: the so-called false negatives never (to my knowledge) actually match the profile. Only the fairy tale built around their success matches it.
So it is with cult-ish behaviour as well. These are specific human failures that self-reinforce, and while some of their behaviour occurs in successful people (especially successful and mentally ill or far-from-neurotypical people), there is a core of destructive and unique behaviour evident in both that you absolutely should recognize and avoid. It's not just the statistical argument that you will gain much more than you lose by avoiding it; it's that it is staggeringly improbable that you will lose anything.
Having established that people evaluate things these two ways, the author then says, "I will demonstrate to both of these ways of thinking that AI fears are bogus."
And yet I suspect that many of the people swayed by one application of the argument won't be swayed by the other and vice versa. Interesting, isn't it?
Meanwhile, rewind ~100 years, and suppose you ignored the luminiferous aether. Or suppose you straight away saw Einstein was a genius? Oh, wait... nobody cares. Because you're dead.
So I'm not sure what the long-term problem is here.
Meanwhile, you, personally, can probably safely ignore people that appear to be cranks.
But in this case, if they're right then we're about to wipe out humanity. That's not safe to ignore.
"What kind of person does sincerely believing this stuff turn you into? The answer is not pretty.
"I'd like to talk for a while about the outside arguments that should make you leery of becoming an AI weenie. These are the arguments about what effect AI obsession has on our industry and culture:..."
...grandiosity, megalomania, avoidance of actual current problems. Aside from whether the superintelligence problem is real, those who believe it is seem less than appealing.
"This business about saving all of future humanity [from AI] is a cop-out. We had the same exact arguments used against us under communism, to explain why everything was always broken and people couldn't have a basic level of material comfort."
What nonsense. None of the credible people suggesting that superintelligence poses a risk are spouting generic arguments that apply equally well to communism or any previous situation.
The question is not IF humanity will be replaced but WHEN and HOW.
Clearly, in a world with superintelligence growing at a technological pace, instead of evolutionary pace, natural humanity will not remain dominant for long.
So it makes enormous sense to worry about:
* Whether that transition will be catastrophic or peaceful.
* Whether it happens suddenly in 50 years or in a managed form over the next century.
* Whether the transition includes everyone, a few people, or none of us.
Hawking, Musk, et al. are highly successful people with objectively valuable contributions who are known to be able to think deeply and solve problems that others have not.
They are as far from cranks as anyone could possibly be.
Anyone can find non-argument related reasons to suggest anyone else is crazy or a cultist, because no human is completely sane.
What someone cannot do (credibly) is claim that real experts are non-expert crazies based on appearances, while completely ignoring their earned credentials.
You immediately self-contradicted here. If a decision heuristic often leads to the correct conclusion, then it's a good heuristic (if you are under time constraints, which we are).
Of course, given unlimited time to think about it, we would never use ad hominem reasoning and would consider each and every argument fully. But there are tens of thousands of cults across the world, each insisting that they possess the Ultimate Truth, and that you can have it if you spend years studying their doctrine. Are you carefully evaluating each and every one to give them a fair shake? Of course not. Even if you wanted to, there is not enough time in a human lifespan. You must apply pattern-matching. The argument being made here isn't really an ad hominem; it's more like "The reason AI risk-ers strongly resemble cults is because they functionally are one, with the same problems, and so your pattern-matching algorithm is correct". Note that the remainder of the talk is spent backing up this assertion.
There's a good discussion of this in the linked article about "learned epistemic helplessness" (and cheers to idlewords for the cheeky rhetorical judo of using Scott Alexander and LW-y phrases like "memetic hazard" in an argument against AI risk), but what it boils down to is that our cognitive shortcuts evolved for a reason. Sometimes the reason comes from our ancestral environment and no longer applies, but that is not always true. When you focus solely on their failure cases, you lose sight of how often they get things right... like protecting you from cults with the funny robes.
> "if they look like a crank, ignore everything they say", then you miss special relativity (and later general relativity).
A lot of people did ignore Einstein until Mercury's anomalous perihelion and the 1919 eclipse observations provided empirical evidence for relativity, and they were arguably right to do so.
A heuristic like "it's risky to rent a car to a male under 25" saves a lot of cost in terms of making the decision (background checks, accurately assessing the potential renter's driving skills and attitude towards safety, etc.) and has minimal downside (you only lose a small fraction of potential customers) and so it's a good heuristic.
A heuristic like "a 26-year-old working a clerical job who makes novel statements about the fundamental nature of reality is probably wrong" does reduce the decision cost (you don't have to analyze their statements) but it has a huge downside if you're wrong (you miss out on important insights which allow a wide range of new technologies to be developed). So even though it's a generally accurate heuristic, the cost of false negatives means that it's not a good one.
So, is it a good heuristic to conclude that since crime is related to poverty and minorities tend to be poor, minorities qua minorities ought to be shunned?
Uh ... https://en.wikipedia.org/wiki/History_of_special_relativity#...
This seems, actually, like a perfect argument going in the other direction. Every day, millions of people put cats into boxes, despite the cats not being interested. If you offered to pay a normal, reasonably competent person $1,000 to get a reluctant cat in a box, do you really think they simply would not be able to do it? Heck, humans manage to keep tigers in zoos, where millions of people see them every year, with a tiny serious injury rate, even though tigers are aggressive and predatory by default and can trivially overpower humans.
If you're the first human on an island full of tigers, you're not going to end up as the Tiger King.
One way to get a cat into a carrier - well, the catfood industry created those funny little dry food pellets that are somehow super-addictive. Shake the box, my cat will come. Drop one in the carrier, it surely will enter. Will it eventually adapt to the trick? Maybe, but not likely if I also do this occasionally without closing the carrier door behind the cat.
Yes, we can outsmart the cat. Cats are funny because they do silly, unpredictable things at random, not because they can't be reliably tricked.
I also think this isn't a very good analogy. In this case we're talking about manipulating humans, where we already know manipulation is eminently possible.
Heck it wouldn't even need psychological manipulation. Hack a Bitcoin exchange or find another way of making money on the internet, then it can just pay someone absurd sums to do what it wants.
To become the President, you need only overcome a thousand Florida voters.
To intern the Japanese, you need only overcome two members of SCOTUS (Korematsu v. US)
It isn't necessary for Hawking to be able to trick the average cat into a box. It's sufficient to trick a handful of cats in total.
How well an AI could survive on a desert island is an irrelevant question when Amazon, Google and dozens of others are already running fully (or as near as makes no difference) automated datacentres controlled by a codebase that still has parts written in C. Hawking can easily get the cat in the container: all he has to do is submit a job to TaskRabbit.
And Hawking would just ask his assistant to put the cat into the box. You are artificially depriving him of his actual resources to make a weak point.
The first time Random Human A attempts to get Random Cat B into a box, they're going to have a hard time. They'll get there eventually, but they'll be coughing from the dust under the bed, swearing from having to lift the bloody sofa up, and probably have some scratches from after they managed to scare the cat enough for it to try attacking.
However, speaking as a cat owner, if you've iterated on the problem a dozen or so times, Cat B is going in the box swiftly and effortlessly. Last time I put my cat in its box, it took about 3 minutes. Trying for the bed? Sorry, door's closed. Under the sofa? Not going to work. Trying to scratch me? Turns out cat elbows work the same way as human elbows.
The same surely applies to a superintelligent AI?
(Likewise with the Navy SEAL on the island of the tigers. Just drop one SEAL in there with no warning? He's screwed. Give a SEAL unit a year to plan, access to world experts on all aspects of Panthera tigris, and really, really good simulators (or some other iteration method) to train in? Likely a different story.)
http://www-history.mcs.st-and.ac.uk/Extras/Spitzer_lion.html
This is the one assumption I am most skeptical of. In my experience, each time you make a system more clever, you also make it MUCH more complex. Maybe there is no hard limit on intelligence, but maybe each generation of improved intelligence actually takes longer to find the next generation, due to the rapidly ramping difficulty of the problem.
I think people see the exponential-looking growth of technology over human history, and just kinda interpolate or something.
Nick Bostrom keeps going on in his book about the singularity, and about how once AI can improve itself it will quickly be way beyond us. I think the truth is that the AI doesn't need to be self-improving at all to vastly exceed human capabilities. If we can build an AI as smart as we are, then we can probably build one a thousand times as smart too.
You're equating speed with quality. There's no reason to assume that. Do you think an AI will be better at catching a fieldmouse than a falcon? Do you think the falcon is limited by speed of thought? Many forms of intelligence are limited by game theory, not raw speed. The challenge isn't extracting large quantities of information, it's knowing which information is relevant to your ends. And that knowledge is limited as much by the number of opportunities for interaction as by the availability of analytic resources.
Think of it this way: most animals could trivially add more neurons. There are plenty of outliers who got a shot, but bigger-brained individuals obviously hit diminishing returns, otherwise the population would've shifted already.
Trust me, if I were to be wired directly to the Internet and had some well-defined goals, I'd be much more effective at it than any of us here - possibly any of us here combined. Because as a human, I have to deal with stupid shit like social considerations, random anxiety attacks, the drive to mate, the drive of curiosity, etc. Focus is a powerful force.
You're equating human time with AI/computer time. A one-day-old neural net has already experienced multiple lifetimes' worth of images before it is able to beat you at image recognition. It's not trivial, but we just gloss over the extremely complex training phase because it runs at a different clock speed than we do.
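A back-of-envelope sketch of that clock-speed gap (both rates below are round numbers I am assuming purely for scale; the multiplier grows with bigger hardware):

  # Rough arithmetic only; both rates are assumptions.
  images_per_second_training = 20_000                          # assumed throughput of a multi-GPU rig
  images_seen_per_day = images_per_second_training * 86_400    # ~1.7 billion images per day

  human_looks_per_day = 3 * 16 * 3_600   # ~3 distinct glances per second, 16 waking hours
  print(images_seen_per_day / human_looks_per_day)  # ~10,000: one training day is on the
                                                    # order of decades of human "looking"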
>The only way out of this mess is to design a moral fixed point, so that even through thousands and thousands of cycles of self-improvement the AI's value system remains stable, and its values are things like 'help people', 'don't kill anybody', 'listen to what people want'.
Bostrom absolutely did not say that the only way to inhibit a cataclysmic future for humans post-SAI was to design a "moral fixed point". In fact, many chapters of the book are dedicated to exploring the possibilities of ingraining desirable values in an AI, and the many pitfalls in each.
Regarding the Eliezer Yudkowsky quote, Bostrom spends several pages, IIRC, on that quote and how difficult it would be to apply to machine language, as well as what the quote even means. This author dismissively throws the quote in without acknowledgement of the tremendous nuance Bostrom applies to this line of thought. Indeed, this author does that throughout his article - regularly portraying Bostrom as a man who claimed absolute knowledge of the future of AI. That couldn't be further from the truth, as Bostrom opens the book with an explicit acknowledgement that much of the book may very well turn out to be incorrect, or based on assumptions that may never materialize.
Regarding "The Argument From My Roommate", the author seems to lack complete and utter awareness of the differences between a machine intelligence and human intelligence. That a superintelligent AI must have the complex motivations of the author's roommate is preposterous. A human is driven by a complex variety of push and pull factors, many stemming from the evolutionary biology of humans and our predecessors. A machine intelligence need not share any of that complexity.
Moreover, Bostrom specifically notes that while most humans may feel there is a huge gulf between the intellectual capabilities of an idiot and a genius, these are, in more absolute terms, minor differences. The fact that his roommate was/is apparently a smart individual likely would not put him anywhere near the capabilities of a superintelligent AI.
To me, this is the smoking gun. I find it completely unbelievable that anyone who read Superintelligence could possibly assert "The Argument From My Roommate" with a straight face, and thus, I highly doubt that the author actually read the book which he attacks so gratuitously.
This isn't just an unreflective assumption. The argument is laid out in much more detail in "The Basic AI Drives" (Omohundro 2008, https://selfawaresystems.files.wordpress.com/2008/01/ai_driv...), which is expanded on in a 2012 paper (http://www.nickbostrom.com/superintelligentwill.pdf).
But it only takes one intelligent agent that wants to self-improve for the scary thing to happen.
If you knew an asteroid would hit the earth 1 year from now, and you had the ability to push a button and become 100,000x smarter, I would hope your values would lead you to push the button because it gives you the best chance of saving the world.
Only if all sorts of other conditions (several of which are mentioned in the post) also apply. Merely "wanting to self-improve" is not enough.
- AI as management. Already, there is at least one hedge fund with an AI on the board, with a vote on investments.[1] At the bottom end, there are systems which act as low-level managers and order people around. That's how Uber works. A fundamental problem with management is that communication is slow and managers are bandwidth-limited. Computers don't have that problem. Even a mediocre AI as a manager might win on speed and coordination. How long until an AI-run company dominates an industry?
- Related to this is "machines should think, people should work." Watch this video of an Amazon fulfillment center.[2] All the thinking is done by computers. The humans are just hands.
[1] http://www.businessinsider.com/vital-named-to-board-2014-5 [2] https://vimeo.com/113374910
Not for long. Robots will be cheaper soon.
> All the thinking is done by computers.
It's hard for humans to operate on more than 7 objects at the same time - a limitation of working memory. So naturally there are simple management and planning tasks that benefit from a computer's ability to track more objects.