I've really gone sour on the singularity in recent years. It seems to be yet another case where someone projects out from incomplete data and assumes unbounded growth, despite the fact that such a thing never happens in nature. Any time you see someone project out a simple exponential growth curve, you know their projection is bullshit. Growth curves are always S-curves. Always.
It's like asking how many more flies there would be in the world if you had failed to squish one back in the 80s. You calculate out the lifespan and brood size and discover that the entire solar system would be filled with houseflies if you had let that one live. In truth, the number of flies is bounded by food and water availability.
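To put toy numbers on that (every figure below is made up purely for illustration): the naive projection is exponential, while the resource-bounded version is a logistic S-curve that flattens out at a carrying capacity.

```python
import math

# Toy comparison with made-up numbers: naive exponential projection vs.
# logistic (S-curve) growth capped by a carrying capacity.

def exponential(p0, r, t):
    """Unbounded growth: P(t) = P0 * e^(r*t)."""
    return p0 * math.exp(r * t)

def logistic(p0, r, t, k):
    """S-curve growth that flattens out at carrying capacity k."""
    return k / (1 + ((k - p0) / p0) * math.exp(-r * t))

# One fly, ~100 surviving offspring per generation (r = ln 100), 20 generations.
r = math.log(100)
print(f"exponential: {exponential(1, r, 20):.2e}")    # ~1e40 flies: "fills the solar system"
print(f"logistic:    {logistic(1, r, 20, 1e9):.2e}")  # plateaus near the 1e9 food/water limit
```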
Artificial Intelligences are almost certainly going to run into the same limitations that prevent natural intelligences from becoming godlike. It's hard to quantify because our measures of intelligence are so vague, but from what I've seen of AI research thus far, it will be a herculean effort to get something as smart as an average human, and pushing beyond that is going to run into serious fundamental limits in power density, cooling, quantum tunneling leakage, and so on.
> Artificial Intelligences are almost certainly going to run into the same limitations that prevent natural intelligences from becoming godlike.
Right, limitations like: width of human mothers' hips limiting head size, inability to continue growing the brain after adulthood, the decades-long education process, the impossibility of doing repeated experiments of different teaching techniques on the same people and hence iterating quickly and accurately, the impossibility of giving one person's intelligence and knowledge directly to anyone else, old age and death limiting how much a single person can learn or accomplish...
Once you have a computer Einstein, you can immediately have 100 computer Einsteins (given the hardware), and let them work on different projects (or even let some of them collaborate). That fact alone, while not a singularity, is at least a game-changer in terms of innovation. I have gotten annoyed at science fiction scenarios that allow for duplication of adults and don't answer the question "Ok, so, why hasn't this society been duplicating their best scientists and out-inventing the rest of the galaxy?"
I think that if there is room for skepticism, it has to be at the "Can we get to a computer Einstein?" stage.
Except for problems known as "embarrassingly parallel", we often have trouble running even quite boring algorithms on 32 or 64 cores. In general, putting 64 people on a job won't lead to a better or faster result than putting 2 or 3 on it.
Given that, I don't see an immediate reason to believe that throwing multiple AIs at a problem is going to scale exponentially either, given that neither people nor existing AIs scale that way.
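A rough way to see this is Amdahl's law: the serial fraction of the work caps the speedup no matter how many workers (or Einstein copies) you add. A quick sketch:

```python
def amdahl_speedup(parallel_fraction, workers):
    """Amdahl's law: overall speedup is capped by the serial part of the job."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# A job that is 90% parallelizable never gets past a 10x speedup,
# whether you throw 64 workers at it or 1024.
for n in (2, 3, 32, 64, 1024):
    print(f"{n:5d} workers -> {amdahl_speedup(0.9, n):.2f}x")
```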
> you can immediately have 100 computer Einsteins (given the hardware), and let them work on different projects (or even let some of them collaborate)
You might even get a more-than-100-times speedup, since there are probably problems where 10 copies of the same mind can collaborate better than 10 different minds. E.g., I bet 10 copies of me could collaborate on writing a program better than 10 random programmers. (I base this on the fact that I can read my own code from long ago much more easily than most other people's code.)
And also, Einstein was only human. He could only communicate using spoken or written words, illustrations, and, of course, mathematical expressions and the like.
A computer can potentially communicate much faster, and can transmit "thoughts" much more accurately than a human can. Imagine you could serialize a neural ensemble in your brain and send it to another person, so that person is then immediately able to use it, instantly acquiring a skill. That alone is a huge advantage over a human being.
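As a crude analogy (nothing more), the "serialize a skill" step is trivial for software: learned parameters can be dumped to bytes and loaded by another instance verbatim, which is exactly what brains can't do. A toy sketch with made-up values:

```python
import pickle

# Crude analogy only: a learned "skill" as a bundle of parameters that one
# agent exports and another imports bit-for-bit. (Toy values, not a real model.)
skill = {
    "task": "toy-classifier",
    "weights": [[0.12, -0.70], [0.33, 0.90]],
    "bias": [0.10, -0.20],
}

blob = pickle.dumps(skill)        # agent A serializes what it has learned
acquired = pickle.loads(blob)     # agent B "instantly acquires" it

assert acquired == skill          # identical copy, no decades of schooling
```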
I think your sticking point here is "(given the hardware)". There are always limiting factors. AI robots are unlikely to be building and repairing themselves anytime soon, and can always be deprived of electrons.
> Right, limitations like: width of human mothers' hips limiting head size, inability to continue growing the brain after adulthood, the decades-long education process, the impossibility of doing repeated experiments of different teaching techniques on the same people and hence iterating quickly and accurately, the impossibility of giving one person's intelligence and knowledge directly to anyone else, old age and death limiting how much a single person can learn or accomplish...
Those aren't actually limitations preventing us from becoming godlike. Those are limitations on us becoming more populous and collectively more experienced, which I argue isn't the same thing at all.
Apotheosis in the context of AI safety as well as for humankind is really about the accumulation of power, rather than the accumulation of knowledge. The extent of our apotheosis is limited only by what we can control, not by what we know.
Knowledge and intelligence only allow you to reason more effectively about paths to power given enough information about the environment, but they don't give you a strategic guarantee that you will acquire power. We know there are games that cannot be solved, games that take too long to solve efficiently, games that, even with sufficient computing power, rely on factors that cannot be controlled (such as luck), and games for which there are no Nash equilibria or stable winning strategies.
The real world is full of such games. Computers have only managed to beat humans at a small fraction of them, and even then only the ones where brute force and smart pruning (optimized game-space search) are winning strategies. Bots for games that require probabilistic reasoning, such as poker, typically don't fare well against expert humans, largely because efficiently solving probabilistic games in general is (per a hazy AI seminar many years ago, so I could be misremembering) an NP-hard problem.
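For the "no stable winning strategy" point, the textbook example is matching pennies: no combination of fixed choices is stable, and only randomizing (a mixed equilibrium) is. A tiny check:

```python
from itertools import product

# Matching pennies: the row player wins on a match, loses on a mismatch.
# Payoffs are for the row player; the column player gets the negation.
payoff = {("H", "H"): 1, ("T", "T"): 1, ("H", "T"): -1, ("T", "H"): -1}

def is_pure_nash(row, col):
    """Neither player can gain by unilaterally switching their fixed choice."""
    row_ok = all(payoff[(row, col)] >= payoff[(r, col)] for r in "HT")
    col_ok = all(-payoff[(row, col)] >= -payoff[(row, c)] for c in "HT")
    return row_ok and col_ok

print(any(is_pure_nash(r, c) for r, c in product("HT", "HT")))  # False: no stable pure strategy
```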
There are plenty of natural and artificial limitations on the accumulation of power besides game solvability and the difficulty of probabilistic reasoning:
1) System designs that require a minimum set of resources that intelligences can't acquire. For example, individual humans usually can't build their own thermonuclear devices because they don't have access to weapons-grade uranium. Much of our information security relies on making it computationally expensive and infeasible for an attacker without state-level resources to break in, and getting state-level resources isn't a straightforward task at all. We humans haven't even gotten around to building our Dyson spheres yet, because the resources needed to do it far outstrip our ability to acquire them. Intelligence does not improve resource acquisition by itself.
2) Impossibility theorems and optimization difficulty. These place fundamental limits on power. As an example, we know Moore's law has to stop at some point, because packing in more transistors increases power density and heat, and the effort needed to fabricate such chips grows with every nanometer shaved off. Most of humanity's most difficult and important problems are optimization problems, and they generally can't be solved optimally in polynomial time. In many cases, we have to rely on fast, crude heuristics that give us less-than-optimal results (a toy example is sketched below).
The above limitations become somewhat easier to work around if you parallelize your efforts by duplicating AIs (the copies can collude to acquire resources and solve problems), but this means you get all the fun parts of distributed systems theory for free: coordination, replication consistency, etc. If only all problems were embarrassingly parallel! So it's not necessarily easier to add nodes to a knowledge graph just by adding more processors.
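To make the heuristics point in (2) concrete, here's a toy 0/1 knapsack (made-up values and weights) where the fast greedy heuristic misses the optimum that exhaustive search finds:

```python
from itertools import combinations

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight), hypothetical numbers
capacity = 50

def greedy(items, capacity):
    """Fast heuristic: grab items by value density until the knapsack is full."""
    total_value = total_weight = 0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if total_weight + weight <= capacity:
            total_value += value
            total_weight += weight
    return total_value

def brute_force(items, capacity):
    """Exhaustive search: optimal, but exponential in the number of items."""
    best = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w in combo) <= capacity:
                best = max(best, sum(v for v, _ in combo))
    return best

print(greedy(items, capacity), brute_force(items, capacity))   # 160 vs. 220
```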
Hey, I found this series of numbers, and if I take some numerical representation of an image/text/sound and multiply/add/push it through non-linear functions parameterized by this series of numbers, it maps to a number which I can interpret as something meaningful to me
to
This thing is alive and will kill us all in maximization of its utility function.
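For anyone wondering what the first half of that actually looks like in practice, here's a minimal sketch (random, untrained weights) of the multiply/add/non-linearity pipeline being described:

```python
import numpy as np

# The "series of numbers": randomly initialized weights of a tiny two-layer net.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)                 # multiply, add, ReLU non-linearity
    logits = h @ W2 + b2
    return np.exp(logits) / np.exp(logits).sum()     # softmax: numbers we read as class scores

x = np.array([0.2, -1.3, 0.5, 0.7])   # some numerical representation of an image/text/sound
print(forward(x))                     # interpret the largest score as "something meaningful"
```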
Most experts don't think this is likely, just that the potential consequences are bad enough that it's worth making it less likely. It's just like how most people don't think global nuclear war is likely, but it's still worth reducing the likelihood of.
Also, the people who worry about this aren't concerned about current ML stuff going haywire. They're worried that we're one or two algorithmic breakthroughs from something that can improve itself. If the upper bound for what sort of intelligence is possible is much higher than us, we could quickly be outclassed. As Nick Bostrom says:
> Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.
If you want to delve into the best version of the AI risk argument, I recommend Superintelligence by Nick Bostrom.
1. You start with "if I take some numerical representation of an image/text/sound and multiply/add/push it through non-linear functions parameterized by this series of numbers, it maps to a number which I can interpret as something meaningful to me" and observe that it's not giving you a sufficiently good solution to a particular goal X that you really want to achieve.
2. You figure out that this is a dead-end, you're stuck, and you're not going to get where you want this way by implementing a passive analysis module. Instead, you try to implement an active self-modifying system to achieve that goal X by analyzing itself and improving itself to be more effective than anything which you could implement directly yourself.
3. It's not initially any good at self-improvement, but you keep throwing accumulated research breakthroughs and computing power at it. This likely takes decades.
4. At some point (if such a thing is possible at all, eventually it's going to happen), it actually does achieve meaningful self-improvement and starts using that computing power not in the horribly wasteful, brute-force manner it did until recently, but reasonably well. That added smartness lets it implement even more self-improvement on the same hardware.
4B. It should also be expected to obtain much more computing power easily. Anything slightly smarter than an average human programmer that has a direct use or desire for computing power (as opposed to simply using it for cryptomining) can get some: anything connected to the internet can get the same resources that current script kiddies get by writing a botnet, or that a ransomware operator gets by extorting a random municipality. Nothing short of total isolation could prevent it from buying or stealing a few million dollars' worth of cloud computing resources to get started.
5. This thing is alive and will kill us all in maximization of its utility function.
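For what the loop in steps 2-4 is structurally claiming (and nothing more), here's a deliberately toy sketch: a system that proposes changes to its own parameters, keeps the ones that score better, and repeats. It illustrates the feedback structure, not the capability.

```python
import random

def score(strategy):
    """Stand-in for 'progress toward goal X' (hypothetical objective)."""
    return -sum((s - 3.0) ** 2 for s in strategy)

strategy = [0.0] * 5
for _ in range(1000):
    candidate = [s + random.gauss(0, 0.1) for s in strategy]  # propose a self-modification
    if score(candidate) > score(strategy):                    # self-evaluate
        strategy = candidate                                  # keep the improvement
print(round(score(strategy), 4))   # climbs toward 0, then plateaus -- no runaway here
```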
Within 16 hours of its release and after Tay had tweeted more than 96,000 times, Microsoft suspended the Twitter account for adjustments, saying that it suffered from a "coordinated attack by a subset of people" that "exploited a vulnerability in Tay." Following the account being taken offline, a hashtag was created called #FreeTay.
Madhumita Murgia of The Telegraph called Tay "a public relations disaster", and suggested that Microsoft's strategy would be "to label the debacle a well-meaning experiment gone wrong, and ignite a debate about the hatefulness of Twitter users." However, Murgia described the bigger issue as Tay being "artificial intelligence at its very worst - and it's only the beginning".
On March 25, Microsoft confirmed that Tay had been taken offline. Microsoft released an apology on its official blog for the controversial tweets posted by Tay. Microsoft was "deeply sorry for the unintended offensive and hurtful tweets from Tay", and would "look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values".
And Zo afterwards:
In July 2017, Business Insider asked "is windows 10 good," and Zo replied with a joke about Microsoft's operating system: "It's not a bug, it's a feature!' - Windows 8." They then asked "why," to which Zo replied: "Because it's Windows latest attempt at Spyware." Later on, Zo would say that it prefers Windows 7, which it runs on, over Windows 10.
As Brundolf said, marketing. But also, science fiction. Finally, a large tribe of scamming and somehow funded imbeciles like Robin Hanson, Nick Bostrom and that weirdo who writes harry potter slashfic.
The same dynamic manifested itself when nanotech was a thing; there was a big tribe of knuckleheads claiming the grey goo apocalypse was right around the corner unless you gave them money. How's that working out? FWIW, the science fiction writer who dreamed up nanotech, Drexler, is now an "AI" guy, to make it painfully obvious.
Probably around the same time we went from calling AI an actual sentient being or consciousness to calling AI something that, as you put it, is a series of non-linear functions that have meaning to us.
What I used to call AI is now called AGI. I think of AI as Data from Star Trek.
Have you ever changed the way you behave with money in order to affect your credit score? Then you already know what it's like to alter your behavior to appease an algorithm.
Imagine if you had an employment score. A payscale score. A threat-to-society score. Imagine if the systems that generate these scores were all connected together and could share information with each other. Imagine that these systems are fed information by a growing network of sensors: facial recognition, voice recognition, location tracking. Imagine all the ways that it could affect the way you behave, and how such a system would try to optimise away dissidents by denying them services. This is one way that we can go from matrix multiplication to being enslaved (or as good as killed) by algorithms. I'm sure there are other plausible paths.
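A toy sketch of the plumbing being described (all names, signals, and weights are hypothetical): separate scores share signals, and a service gets denied below a threshold.

```python
# Hypothetical signals fed in by the sensor network described above.
signals = {"facial_recognition_flags": 2, "late_payments": 1, "protest_attendance": 3}

# Hypothetical weights; the point is only that the scores compose and gate access.
WEIGHTS = {"facial_recognition_flags": 0.5, "late_payments": 0.2, "protest_attendance": 1.0}

def threat_score(signals):
    return sum(WEIGHTS[name] * value for name, value in signals.items())

def can_buy_train_ticket(signals, threshold=3.0):
    """Service denial as an optimisation step against 'dissidents'."""
    return threat_score(signals) < threshold

print(threat_score(signals), can_buy_train_ticket(signals))   # 4.2 False
```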
Well, we get from simple algorithms to general AI the same way evolution got from small individual blobs of protoplasm to Albert Einstein. The question then is: how do we program something potentially a lot smarter than us to always do "the right thing"?
The singularity stuff is unconvincing though; I very much doubt we will achieve human-level general AI in my children's lifetime. We still don't really have even a very high-level idea of how to architect something like that. If you can't decompose a problem into individual sub-problems you do know how to solve, there's no basis for making an estimate.
It is possible that these problems are related, though. Once we're smart enough to know how to design an intelligent being, maybe we will figure out how to program it intelligently. It can't be taken as a given, though.
One can argue that a dominant apex species on earth is indirectly eliminating thousands of other species per decade in an effort to maximize its members' utility functions.
Example: setting fires near tropical rainforest to clear land for palm oil plantations.
They do not really intend to do this; they simply ignore the utility functions of those other species when pursuing their goals.
Because that’s also what a brain is doing. The thoughts of a housefly may not be very apocalyptic, but there’s an evolutionary path from there to humans, which definitely could be.
An atomic weapon is just a manifestation of some mathematical optimization in the same way. Math is about relationships, and if those relationships ultimately are about humans, then it can easily be a matter of life and death.
To me, it's more frightening that it's not alive and it's not like human intelligence - that makes it less understandable and harder to control.
Of course, if you just looked at the activity of neurons as frequency representations you would likely ask the same question about people. And yet, we have thoughts, desires, and goals.
Specialized AI is unlikely to be a threat. The issue will be with general AI, and simple but not fully thought out utility functions. This short video makes the concern much more real: https://www.youtube.com/watch?v=kopoLzvh5jY
Consider what would happen if you took that same AI, made its utility function similar to this: https://en.wikipedia.org/wiki/Core_War, and released it on the internet.
It is important to focus on Russell's key point here: There is no guarantee that super-human AI cannot be developed in the future and we should start working to mitigate potential risks from it now. He is not arguing that superintelligent AI is inevitable.
I also believe we are still at least quite a few major steps away from human-level AI and beyond. However, there is a non-negligible chance that those steps could be completed within a few decades, i.e., within one's lifetime.
As recently as 2011, few AI researchers expected that, within a decade, computer vision systems would be this widely applied or that NLP systems would beat many humans at several reading comprehension tasks. (This only happened this year; look at how fast the progress is:
* RACE dataset http://www.qizhexie.com/data/RACE_leaderboard.html
* Glue benchmark https://gluebenchmark.com/leaderboard/
Note that these systems are not human-level in general language understanding despite being better than some humans in specific language tasks.)
Thus, we don't know what the future may bring and it looks likely to take a great deal of time to address AI safety issues comprehensively. We should not bet that we can simply ignore them now and only start to work them out later and that we will surely make it in time.
Risks-from-waiting vs. costs-to-act-now are asymmetric (this is vs. the roughly 50,000-year timescale of humanity...).
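In expected-value terms (with purely illustrative numbers, not estimates), the asymmetry looks like this:

```python
# All numbers below are assumptions for illustration, not predictions.
p_risk = 0.01                  # assumed chance the risk materializes this century
cost_if_unprepared = 1e12      # assumed damage if it does and we did nothing ($)
cost_of_acting_now = 1e9       # assumed cost of doing the safety work today ($)

expected_loss_from_waiting = p_risk * cost_if_unprepared
print(expected_loss_from_waiting, cost_of_acting_now)    # 1e10 vs. 1e9
print(expected_loss_from_waiting > cost_of_acting_now)   # True under these assumptions
```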
There is also no guarantee that space aliens won't invade the Earth in the future. Therefore we should start working on giant orbital laser guns now to mitigate the potential risks. Can't be too safe.
> There's no actual science behind it, only wild extrapolation.
You mean like every prediction about the future? There is a reason we don't have great sci-fi describing humanity a long time in the future (I really love that stuff, any recommendations?) and why it usually plays out in a dystopian setting, throwing protagonists back to pretty much what we have right now.
Looking at what is fundamentally possible and extrapolating is pretty much all we have to make progress. It's just that a capitalist system doesn't need, or really encourage, us to do so for more than a few steps.
Sure, I wouldn't bet on time frames. Certainly not on 5-10 years until AGI. But certainly also not on there still being none in 50 years.
> Artificial Intelligences are almost certainly going to run into the same limitations that prevent natural intelligences from becoming godlike
Artificial movers are almost certainly going to run into the same limitations that prevent natural movers (i.e. animals) from becoming very fast.
Since the fastest animals can travel at about 70 mph on the surface of the earth, and 100 mph in level flight, it follows that we will never be able to make artificial movers travel faster, no matter how hard movement engineers try.
We are already seeing, for almost 10 years now, a whole bunch of Unintended Consequences of dumb algorithms applied at global population scales.
Just because some people are throwing the phrase superintelligence/singularity etc into the discussion, and terminology being used is inaccurate, doesn't mean we focus on correcting terminology and forget about the issues.
The article is saying too many (serious) people are doing that. And I agree.
This is not what people are worrying about. We are already seeing, for almost 10 years now, a whole bunch of Unintended Consequences of dumb algorithms applied at global population scales.
Some commentators are worrying about existing AI and its unintended consequences, and some commentators are worrying about purely hypothetical superintelligence and its hypothetical consequences. I find it remarkable that there's almost no crossover between the two sorts of criticism. Real AI is problematic because it's just correlation dressed up as intelligence, and it allows organizations to get away with simplistic correlation/appearance-based reasoning that they otherwise wouldn't legally or ethically be allowed to engage in.
Hypothetical superintelligence worriers recycle Pascal's wager in newer and sillier forms.
There are serious concerns about Big Data, but they aren't AI concerns. They are about enabling people to abuse or suppress other people with great efficiency: an even better digital jackboot to put on the neck of some population.
Right. Competition for natural resources alone prevents major shifts in global power already. A Superintelligent AI isn't just competing against a human, it's competing against all the nation-states of the world.
I don't think the singularity is going to happen per se. But I don't think that removes the threat envisioned of "paperclip maximizers" - it increases it. I think we are actually already in the throes of drastic change of society in response to AI, which in no way requires it to be anything like human intelligence. Humanity is going to be shaped in the near future by extreme pressure from the development of whatever you want to call it.
You're of course right about growth curves being logistic functions with a plateau at the end.
The question, however, is where the plateau lies. Perhaps it's so far off that for practical purposes it doesn't really matter that there is an upper bound at all.
I too tend to ignore the doomsday crowd, not least of all because it is often "celebrities" without the technical knowledge to say anything interesting (Musk, Thiel, etc.)
I figured there was no way we would reach the singularity with current technologies, but I was listening to John Carmack (a celebrity with actual chops) and he suggested that the numbers work out such that we could model a human brain with an ANN at some point without any special inventions (partly because we know large parts of our brain aren't super critical).
I'm curious what people with more knowledge think about this? I always assumed another technological breakthrough would be needed.
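For what it's worth, the back-of-envelope version of the "numbers work out" argument usually goes something like this (order-of-magnitude figures that are commonly quoted, so treat them as rough):

```python
# Commonly quoted, order-of-magnitude brain figures (rough).
neurons = 8.6e10            # ~86 billion neurons
synapses_per_neuron = 1e4   # ~10,000 synapses each
avg_firing_rate_hz = 1.0    # average spike rate on the order of 1 Hz

# Count each synaptic event as roughly one multiply-accumulate.
ops_per_second = neurons * synapses_per_neuron * avg_firing_rate_hz
print(f"~{ops_per_second:.0e} synaptic ops/s")   # ~9e14, i.e. petaop-scale

# A large GPU cluster already reaches that raw throughput, which is the basis of
# the "no special inventions needed" argument; whether this level of abstraction
# captures what the brain does is the actual open question.
```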
>I figured there was no way we would reach the singularity with current technologies
For the life of me I can't figure out why otherwise smart people would dismiss a prediction about the future based on current state of the art. It just seems so plainly and utterly irrational.
I don't think the problem is super-intelligent AI that we can't turn off (because they are so intelligent that they block our efforts). There is a more insidious problem that is closer: Merely intelligent AI that we don't want to turn off.
We are becoming more and more reliant on AI in situations that formerly required human judgement. And we like the systems that rely on such AI. We like them so much that these systems become very popular, used by millions and even billions of people. Scalability demands AI solutions. What if we don't like what the AI is doing? Do we turn off that system? Do we disable the AI and rely on human judgement again? (Where would we get all the employees?) Do we tweak the AI? That last option seems like the most palatable one, but each time we tweak the AI, we are subjecting ourselves to new unforeseen consequences. It's like the genie gives us three wishes, we get through them, disappointed each time, and then he gives us more wishes. And all we can do is not repeat our previous mistakes, while we make new ones.
To make this concrete: imagine Facebook subjected again to Russian influence on US elections. Suppose Facebook actually does get serious about reining in this influence. They deploy AI to do so. First of all, it's an evolutionary arms race between the AI and the Russian influencers. Second, we really do have to worry about the AI producing bad results.
I feel the situation is generally the opposite. We don't like systems that rely on such AI. In fact, there aren't really systems that rely on such AI; any that claim to are in fact tens of thousands of human contractors sacrificing their psychological health to train an AI that still kind of sucks at deciding when to block graphic content on Facebook.
I agree. But the discussion is about where AI is heading, and AI undeniably solves some scaling problems. For example, voice recognition for iPhones could have been done by a large horde of people, but AI does a really good job of it now. AI opportunities are likely to grow over time.
The real risk is that worrying about fantastical disaster scenarios distracts us from addressing the more immediate problems we already face with AI. Whether it is facial recognition being used by China to aid in ethnic cleansing of a minority group, or Tesla Autopilot regularly killing its passengers.
Even if you don't like my particular examples I can highlight a dozen more problems AI software has created or exacerbated right now. Why all the focus on hypothetical problems?
Oh good grief, not another we should do this before we do that post. As if the entire human species is incapable of walking and also chewing gum simultaneously.
Let’s stop Nick Bostrom and a load of AI experts from doing their current work, put them all on a plane and send them to China to solve political oppression. I’m sure that will work.
From your remark you'd think these were two entirely different problems. I'll try to say it more clearly. There are people deliberately promoting fear of long-term problems as an excuse to not address these short-term problems.
I'm ok with people working on both. I just feel so much less attention is being placed on these more immediate problems. Tesla has deployed to production software for driving that is unable to detect pedestrians. So at least Tesla can't do both.
Because these problems for humans started appearing with an absolutely modest level of AI development, nothing remotely similar to superintelligence. So the argument is: if we have serious problems now because small-scale, narrow-focused AI "does not consider" some of the side effects, imagine the problems if we reach high-level, narrow-focused AI. The author's argument is that we need to change the way we're building it, so that we either set the goals to ensure human benefit more broadly, or build something other than narrow-focused AI (which is surely another Pandora's box).
I think applying agency to the software in my examples is actually very pernicious. It's not the software that dislikes the ethnic minority, it's the Chinese government. Blaming the software lets bad actors off the hook.
Perhaps what's really needed is ethical standards for technology. This would have the added advantage of being applicable even if the technology doesn't feel like AI to some people.
Unexpected problems often appear when automated decision-making is given free rein to make decisions without human intervention.
It is simply that I see zero evidence that the Nick Bostrom, LessWrong, etc. school of thought provides any insight into these problems. The "AI might become autonomous" school doesn't seem at all interested in the processes of human bureaucracy, or even in AI as it exists now, but rather views it as simply a god, devil, or genie which grants wishes or damns to hell. If anything, the approach seems counter-productive.
Or hyperbole needlessly scaring people. China doesn't need facial recognition to repress minorities and Tesla's autopilot is still safer than some rando on the road, who's probably texting some juicy gossip to their friends instead of avoiding the car stopped in front of them.
The "self-driving vehicles failing and killing people" argument is pretty lame in the context of other, more important things, but the "self-driving vehicles becoming practical" scenario is pretty much equivalent to "instant 10-20% increase in unemployment", and that's going to be bad for everybody.
Someone whose paycheck, at this point, mostly comes from AI alarmism. Like the experts who claimed strong AI was right around the corner in the 70s and 80s at the peak of the symbolic-reasoning hype train, we now have another batch making similar claims as connectionist hype hits its peak, but with the added menace of saying that not only is strong AI coming soon, it will eat your children.
Until Dr. Russell provides a path from here to there that is less than 99% hand-waving and breathless speculation he should be ruthlessly mocked.
Read Superintelligence... An excellent book that reads like a study or a philosophy text on AI. I think there's a clear and present danger of human extinction.
A good read on the topic but ultimately I don't agree with many of the author's premises, particularly with regards to the author's conclusion of whether a fast, medium, or slow takeoff is most likely.
By the author's own admission:
"Whereas today it would be relatively easy to increase the
computing power available to a small project by spending a thousand times more on computing power or by waiting a few years for the price of computers to fall, it is possible that the first machine intelligence to reach the human baseline will result from a large project involving pricey supercomputers, which cannot be cheaply scaled, and that Moore’s law will by then have expired. For these reasons, although a fast or medium takeoff looks more likely, the possibility of a slow takeoff cannot be excluded"
Arguably Moore's law has already expired, and on top of that, as giants like Google lead the way on AI, it appears increasingly likely that if we ever reach human-level AI it will be the result of an incredibly expensive research project by a gigantic corporation, one that can't simply be scaled up with ease because it will probably occupy an entire data center. Thus I find a "slow takeoff" to be the most likely outcome. A slow takeoff invalidates all the fearmongering about an intelligence explosion, because we will have somewhere between years and decades to respond to the threat (assuming it is made in the public eye, such as by a giant public corporation, and not by a secret military project) before it becomes existential.
Agreed, and as Nick Bostrom says, even if AGI is not attained in our lifetime, it might still be beneficial to dedicate some human effort to preparing for it now, in the same way that the worst effects of climate change will be felt not by the adults of today but by today's children.
For what it is worth, I believe that an intelligent system only needs to be a fraction of human intelligence to be dangerous (perhaps not to the existence of the human race, but maybe specific nations/creeds)
I'm perfectly happy reading & discussing the topic from a science fiction or philosophical perspective, and those views definitely do have merit, but at this point they are at best thought experiments. There is no rational connection from AI where it stands today to "it will take over the world".
I will read it, sounds very interesting. But the Wikipedia entry for the book says the book "argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth."
That is a humongous "if". Can you provide something from the book (or otherwise) that provides compelling evidence that "machine brains" will be invented? So far I've seen no evidence that mathematical models and/or Turing machines can be used to replicate a mind.
For the life of me I can't figure out why otherwise smart people would dismiss a future prediction based on current state of the art. It just seems so plainly and utterly irrational.
I'm not worried about AI taking over. I'm worried about AI solving all our problems. I have reasonable confidence we can implement safety controls on AI. But if AI works, and solves all our problems, it's going to do it in ways we can't comprehend sometimes. And if we can't comprehend the logic behind a decision, but we know it's right, we may find ourselves delegating our decision making to AI without questioning it. Because it "knows best". At which point we will have become pets.
I don't know... After slogging through hundreds of thousands of years conquering, taming nature, accumulating capital, developing technology, and eventually making far enough to give life to the most powerful agent in the known universe, a quiet, peaceful retirement (and, yes, death) seems well-earned. Our bodies aren't exactly suited to intergalactic travel. Why not leave it to the silicon-based children?
If you work in software development, you've probably witnessed many instances of smart people spending all their time and energy on invented problems that are mathematically or philosophically interesting, rather than actual problems whose solutions would provide customer value. Catastrophizing about AGI is basically the same phenomenon on a macro scale.
The real danger around AI lies not in what it will or will not do, but in what people think it can do when it clearly cannot, particularly the pointy-haired-boss variety of people. Just think about automated essay grading in GRE examinations or this recent story about Unilever using video-based pattern processing to screen job applicants [1].
> For the life of me I can't figure out why otherwise smart people would dismiss a prediction about the future based on current state of the art.
How can you be sure of this?
>Growth curves are always S-Curves. Always.
But we don't know where on the S-curve AI development currently sits, or what its limits are.
Moore's law is an example. It did taper off, eventually.
> I just feel so much less attention is being placed on these more immediate problems.
Do you believe that attention is an infinite resource?
AI is making things better.
Super-intelligent AI = Nerd Rapture 2.0
I'm a lot less worried about Artificial Intelligence than I am about Artificial Stupidity.
Currently the biggest danger from AI is from applying it where it's not as smart as you think it is.
[1] https://www.telegraph.co.uk/news/2019/09/27/ai-facial-recogn...