Readit News
unholiness · 8 years ago
The further we get into the future, the more I think, what if our data-crunching approach to AI is simply the best thing there is?

Hofstadter said in GEB (in 1979) that the only program capable of beating the best humans at chess would need to be a general, human-like intelligence... human enough to decline your suggestion to play chess and suggest you talk about poetry instead.

It seems that people today are still engaging in the same sort of fallacy. I keep hearing that deep learning is too hyper-specialized, that it's a tool and not an intelligence, that we're still waiting for a revolution of general intelligence, where intelligences will have sophisticated logic and make their decisions without billions of data points, just like humans do.

My counter-hypothesis is this: Computers, compared to humans, will always be more data-hungry (i.e. worse at making general decisions without huge amounts of data) and more data-capable (i.e. better at making decisions with it). And this isn't really a bad thing. We will still see revolutions allowing more and more general problems to be solved, revolutions allowing more and more general data to be considered, and revolutions giving more and more usable interfaces for inputting data and specifying problems. We'll still see data-driven intelligent assistants and data-driven board members making critical decisions like in everyone's utopian dreams/dystopian nightmares.

But these foretold "general" intelligences, who don't need excessive data, the intelligences who pass the Turing test, the intelligences who "want" things and "feel" things, who attempt to solve the problem of replicating humans... those lie unimaginably far in the future. And when they do arrive, they won't really solve any problems that the data-crunchers haven't already solved better.

Eridrus · 8 years ago
> My counter-hypothesis is this: Computers, compared to humans, will always be more data-hungry (i.e. worse at making general decisions without huge amounts of data) and more data-capable (i.e. better at making decisions with it).

I don't think this is necessarily true. Currently we're trying to push computers to do things that are relatively easy for humans to do. I would conjecture that these things are easy for us, not because we are amazing learning machines, but because we have millions of years of evolution, and years as infants with caring teachers going for us.

So, I think we are actually data efficient learning machines, just that we have strong priors, and we spend a lot of effort training each other.

If you compare humans to machines on problems which are not intuitive for us, I think you will find machines to be more data efficient than we are.

goatlover · 8 years ago
> I would conjecture that these things are easy for us, not because we are amazing learning machines, but because we have millions of years of evolution, and years as infants with caring teachers going for us.

And so do chimpanzees. Evolution must have provided us with something additional, which would be our rather more developed cognitive abilities to employ abstract reasoning and metaphor.

Those abilities aren't learned, they're innate, and they allow us to think in ways that don't require large amounts of data. An average human being can be shown an Atari game like Pac-Man and grasp the objective of the game almost right away.

mindcrime · 8 years ago
It seems reasonable to me to believe that there are multiple ways to be "intelligent", and that different kinds of intelligences will excel at different tasks. When we think of "general intelligence" I think we default to thinking about "human intelligence" simply because it's the best example of any kind of general intelligence that we have access to. But I don't see any reason, in principle, to think that "machine intelligence", perhaps in the "data-cruncher" style, can't ultimately exceed human style intelligence.

I mean, we already know machines can be "smarter" than humans in narrow domains (Chess, Go, Checkers, calculating square roots, calculating integrals, etc.) so if we find a way to combine that with some kind of "generality", Bob's yer uncle.

freehunter · 8 years ago
I strongly agree. We may never see a machine that can fool humans into thinking that machine is also human, but then again, why do we actually want that? Just for novelty?

We look at a dolphin and we can say "that is an independently intelligent animal" that we can't really do much with. We look at a dog and say "that is a useful, trainable, intelligent animal" that we as humans couldn't have survived without during parts of the history of our species. A dolphin is far smarter, but it doesn't matter because there's only one intelligent animal we couldn't have lived without, and it's actually pretty dumb compared to a dolphin.

The question is, do we want an AI that is smart by itself, or do we want an AI that is smart and useful to us? Those don't have to be the same thing, as evidenced by the dolphin vs the dog.

Humans are really, really good at producing very efficient machines with strong intellect; we do it by accident all the time. We don't need more humans, humans are flawed and humans have a lot of undesirable traits to go along with the intelligence. Strong, general AI will be its own species, with its own unique way of thinking and its own quirks. Trying to replicate humans exactly is futile and worthless.

We want a machine that can learn and do useful stuff. We don't need a human made of silicon for that. We need a mechanical dog.

vixen99 · 8 years ago
They are smart just as water is smart in finding the path of least resistance.
justinpombrio · 8 years ago
> My counter-hypothesis is this: Computers, compared to humans, will always be more data-hungry (i.e. worse at making general decisions without huge amounts of data) and more data-capable (i.e. better at making decisions with it).

Counterexample:

AlphaGo Zero is currently the best Go player in the world. (It beat AlphaGo, and AlphaGo beat humans.) AlphaGo Zero learned to play Go entirely by playing itself; it was given no training set at all.
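The self-play idea can be sketched in miniature. To be clear, this is emphatically not AlphaGo Zero (which pairs a deep network with Monte Carlo tree search); it's just tabular Monte Carlo learning on one-pile Nim (take 1-3 stones, taking the last stone wins), with every hyperparameter invented for the sketch. The point it illustrates is only that no human games appear anywhere; the agent improves purely from games against itself:

```python
import random

ACTIONS = (1, 2, 3)

def train(episodes=50000, eps=0.3, start=10, seed=0):
    rng = random.Random(seed)
    q = {}  # (stones_left, action) -> average return for the player to move
    n = {}  # visit counts, for incremental averaging
    for _ in range(episodes):
        stones, history = start, []
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            if rng.random() < eps:  # explore
                a = rng.choice(legal)
            else:                   # exploit current estimates
                a = max(legal, key=lambda m: q.get((stones, m), 0.0))
            history.append((stones, a))
            stones -= a
        # The player who took the last stone wins (+1); the reward sign
        # alternates as we walk back through the two players' moves.
        reward = 1.0
        for state, action in reversed(history):
            k = (state, action)
            n[k] = n.get(k, 0) + 1
            q[k] = q.get(k, 0.0) + (reward - q.get(k, 0.0)) / n[k]
            reward = -reward
    return q

def best_move(q, stones):
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda m: q.get((stones, m), 0.0))
```

For the small pile sizes here, the greedy policy after training recovers the known optimal strategy of leaving the opponent a multiple of four stones.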

k_sh · 8 years ago
It's still more data-hungry, though. AlphaGo can play many millions of rounds against itself in the time it takes a human to play one round.

Humans are many orders of magnitude more efficient at improving a skill; computers just appear to do it better sometimes because they can move faster.

emmab · 8 years ago
> capable of making decisions without billions of data points, just like humans do.

Doesn't this ignore transfer learning? Humans have orders of magnitude more than billions of data points over their lifetimes.

ajmurmann · 8 years ago
Computers get billions of data points for a single topic/task. Humans get exposed to ginormous amounts of data that's less focused. The amount of sensory information we are taking in is incredible. That information isn't focused on a single task, but we can take learnings from one field and apply them to another. If a human is trained to sort out foul fruit, the human already knows how to generally differentiate between the different objects they are looking at, smelling, etc. They already know about apples; they already know that fruit can spoil. It's just combining existing knowledge and skills. On a more complicated task like learning a new language the advantages are similar. I'd bet that a sufficiently large neural network that already knows how to perform many tasks would be faster at learning new ones as well.
komaromy · 8 years ago
Additionally, we have absurd amounts of information baked into our genes that give us big head starts on network architectures, motion, vision, etc.
eli_gottlieb · 8 years ago
> The further we get into the future, the more I think, what if our data-crunching approach to AI is simply the best thing there is?

Technically speaking, as long as we equate "AI" with "machine learning", it's a downright trivial statement. The important thing in ML isn't just the presence of "data-crunching", it's the quantity of training data relative to the size and complexity of the hypothesis class.

As long as "intelligence" requires dealing with ambiguous sensorimotor data, it will involve some statistical component, and will therefore involve a "data-crunching approach" somewhere in it.

>But these foretold "general" intelligences, who don't need excessive data,

Hierarchical generative models already do phenomenally well at one-shot and small-sample learning against basically all previous ML methods. Yes, this includes deep learning.

mark_l_watson · 8 years ago
+1 I agree with you. Even though friends like Ben Goertzel believe in AGI and work hard to create it, I think the short- and medium-term path to much better AI will be in assistive systems. I manage a small machine learning team at a large bank and I am an all-in believer that systems built with deep learning, probabilistic graph models, <fill in any master algorithm you want here>, etc. will fundamentally change the way knowledge workers work and transform society. This belief makes me excited to go into work every morning.

I love Douglas Hofstadter’s work and I think I own all of his books. I am not very academic in my outlook. I love technology for what it lets me build. Reading Hofstadter is like looking into the mind of someone with a very different world view from my own.

orthoganol · 8 years ago
You're still extremely optimistic about the 'intelligence' of state-of-the-art data-driven approaches, even if they aren't general intelligence. I'm not sure where that optimism is coming from.

The chess example... "They were wrong about AI never beating grand masters, they're going to be wrong about X". Well, if you make the board bigger, there won't be an AI system that can beat a human.

Play Dota 2, but introduce a random variable that can't be known beforehand by anyone, like things in the real world, and the AI will always be beatable.

Great for specific domains, obviously, but your optimism about doing more advanced stuff perhaps doesn't seem so grounded.

goatlover · 8 years ago
> Play Dota 2, but introduce a random variable that can't be known beforehand by anyone, like things in the real world, and the AI will always be beatable.

I wonder about a board game that randomizes the rules in simple ways. A human could understand the rule changes and adapt. To what extent can software be trained to do that?

abecedarius · 8 years ago
He did write that about chess, but clearly labeled as a personal hunch.

Your own hunch about what’s unimaginably far off differs from mine, fwiw — I’m very unsure but would not bet against human-flexible AI in our lifetime.

guscost · 8 years ago
It's as if when considering a supersonic jet airplane, we were to ask, "When will it be able to power itself by catching fuel in flight?"

After all, some birds can do that, so we know it must be possible.

iopq · 8 years ago
The Turing test is actually super easy to beat given enough resources. Just have enough data and imitate responses.

Chat bots can already convince humans that they are human. Not reliably; some people can still tell the difference or ask tricky questions.

There are only so many ways to tell if someone you're talking to is a bot. Bots can already spit out meaningful sentences.

It's the person who is testing the bot that's the limitation. They need to have VERY good intuition about when it's a person being silly or a bot that can't quite find the right response. If you read the Wikipedia article on the Turing test, you can see that computers have already been able to pass it.

lisper · 8 years ago
Something often missed when talking about GAI is that we humans are built by our genes in order to facilitate their (not our) reproduction. Genes don't care about anything, not even reproduction (except in the sense that water cares about flowing down hill), but one of the tricks they use to advance their "agenda" is to build brains that do care about things.

A lot of what we call "intelligence" is actually a side-effect of caring about things more than it is evidence of thinking. In particular, it's a side-effect of caring about the kinds of things that advance our reproductive fitness. For example: Hofstadter laments that, although computers can now trounce the best humans in chess, they don't look for "elegant moves" or decline to play and have tea instead. Hofstadter cares about these things because chess is more than an abstract mathematical construct. It is, like all sporting events, a social construct, one that distills the essence of competition where the participants care about who wins and who loses. And all of this derives from evolution where genes that build brains that care about winning competitions outperform genes that don't.

One of the things holding back computers from being GAIs is that we have not yet figured out how to make them care about anything, and so they cannot possibly understand the visceral difference between winning and losing, or the emotional angst of being up against a deadline or deciding to take a risk. All of these are part and parcel of everything we humans do. The ability to do math is just an interesting and useful side-effect, but it was never the main event.

Personally, I think it's a good thing that we don't know how to make computers care about things because once we figure that out they really will become potentially dangerous. Our desires are hard-wired into us by our genes. Once computers have desires of their own, their interests may align with ours, but that's not a given. And if they don't, that could be a really big problem.

arketyp · 8 years ago
Unsupervised and online learning have to be figured out first (perhaps by finally abandoning backprop methods, perhaps by finding some hybrid approaches); after that, I look forward to seeing veritable reinforcement learning and creature-like intelligences take shape.

I've always been fascinated by how little is needed in terms of feedback loops in order for something to appear alive, I could say almost soul-like. The image in my mind is always that of -- and I'd like to give a better example -- the heat-seeking homing missile. I'm surprised Hofstadter, who is all about self-referential loopiness, does not appreciate this, because I strongly agree with him in the belief that self-reference is the essence of much mystery. Then again, I never heard him passionately entertain chaos theory either, or psychedelics for that matter. I think Hofstadter has a very particular take on things (he calls himself a picky person). He can afford to be obstinate because he is no doubt a free spirit and a brilliant guy, but it does make him appear dismissive sometimes.

lisper · 8 years ago
> I've always been fascinated by how little is needed in terms of feedback loops in order for something to appear alive

You're not alone.

https://en.wikipedia.org/wiki/Braitenberg_vehicle
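A Braitenberg vehicle is small enough to sketch in a few lines. This is just an illustration of the crossed-wiring idea (all constants -- sensor angle, gain, timestep -- are arbitrary choices): two light sensors drive two wheel motors, left sensor to right wheel and vice versa, so the brighter side speeds up the far wheel and the vehicle turns toward the light. Nothing in it plans or represents anything, yet it reliably seeks the light out.

```python
import math

def intensity(px, py, light):
    """Light intensity falling off with squared distance."""
    d2 = (px - light[0]) ** 2 + (py - light[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, light, dt=0.1, offset=0.5, gain=4.0, base=0.5):
    # Sensors sit ahead-left and ahead-right of the vehicle's nose.
    lx = x + math.cos(heading + 0.5) * offset
    ly = y + math.sin(heading + 0.5) * offset
    rx = x + math.cos(heading - 0.5) * offset
    ry = y + math.sin(heading - 0.5) * offset
    # Crossed excitatory wiring: left sensor -> right motor, and vice versa.
    left_motor = base + gain * intensity(rx, ry, light)
    right_motor = base + gain * intensity(lx, ly, light)
    heading += (right_motor - left_motor) * dt  # brighter left => turn left
    speed = (left_motor + right_motor) / 2.0
    return (x + math.cos(heading) * speed * dt,
            y + math.sin(heading) * speed * dt,
            heading)

def closest_approach(x, y, heading, light=(0.0, 0.0), steps=300):
    """Run the vehicle and report how close it ever gets to the light."""
    best = math.hypot(x - light[0], y - light[1])
    for _ in range(steps):
        x, y, heading = step(x, y, heading, light)
        best = min(best, math.hypot(x - light[0], y - light[1]))
    return best
```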

galaxyLogic · 8 years ago
True, but it's good to keep in mind that genes don't have an "agenda" either. They just are the way they are because they survived.

It's like having a neighbor who just won $100 million in the lottery and thinking: wow, what is his secret? How could he do that? I don't think I could ever do that. How did he do it? The fact is he didn't have much to do with it; it was all pure chance.

lisper · 8 years ago
Right. That's the same point I was trying to make when I said that genes "care" about reproducing in the same way that water "cares" about flowing downhill. Neither genes nor water really care about anything, they just do what they do because physics.
kenjackson · 8 years ago
What frightens me is the scenario of human thought being overwhelmed and left in the dust. Not being aided or abetted by computers, but being completely overwhelmed, and we are to computers as cockroaches or fleas are to us. That would be scary.

This implies to me that his definition of intelligence is really centered on what we as humans do. And that is interesting, but less interesting than a more general notion of intelligence.

sorokod · 8 years ago
Not necessarily. It just means that an ML-based approach may be better than whatever it is we humans do in enough tasks to make the humans irrelevant.

Think calligraphy Vs. the printing press.

astrodust · 8 years ago
I'm sure some people thought the printing press was the devil incarnate as it took the human element out of books. Prior to that every letter, every page, was produced with some measure of human effort. Holding that work, reading those letters, was something special.

Now there's no direct physical connection between what we write and the book someone holds, yet we don't run around screaming that automated printing has destroyed writing.

With intelligence this is likely to be the same thing: AI can amplify regular intelligence just as the printing press can amplify the ability of one writer to reach more people.

runeks · 8 years ago
> It just means that an ML-based approach may be better than whatever it is we humans do in enough tasks to make the humans irrelevant.

Given that more and more people are employed in ML-related jobs, what makes you think advances in ML will render humans irrelevant? Seems to me like the opposite is happening right now.

dgreensp · 8 years ago
I think it’s really important to distinguish

1. pure problem-solving (chess, go) from

2. competency of behavior in the world (vision, navigating a maze), from

3. universal cognitive-emotional life (getting frustrated trying to accomplish a goal and trying a different approach; having competing drives that form the basis for goal-setting, like hunger, boredom, and self-preservation), from

4. more arbitrary-seeming, human-like cognitive emotions, like humor and beauty.

You can have 1 and 2 without anything directly resembling human intelligence, and 3 with animal-like intelligence, or you could make something completely alien. An appreciation of humor and beauty would be a great way to demonstrate you’ve made something like a virtual “human mind.”

There’s no reason computers couldn’t be better at all of these things, including writing better jokes. There’s a funny blog post somewhere about the idea of a computer writing superhuman-level funny jokes; I wish I remembered where!

dgreensp · 8 years ago
Also, what does it mean for intelligent computers to “leave us in the dust,” when we are “like fleas to them”?

1. They are so much more intelligent than us as to render us insignificant — because we all know that intelligence is what makes people significant and worthy.

2. They are better humans than humans, not just more intelligent in a problem-solvy way but more moral and compassionate as well; truly “better” (there has been sci-fi about this rather fanciful but easily written scenario)

3. We build “wilier” machines/software that, given the power to do so, can out-strategize us and win in battle, or outcompete us in the economy. This is quite possible. Obviously we should limit the power (physical, legal, etc) of this software. There are real legal and economic issues here — not to mention futuristic disaster movie plots that could become real — but not moral ones.

4. We build artificial life that’s way smarter than us, and it decides it doesn’t care about us because we’re such dumb simpletons. The same way we don’t care about bugs, presumably because they’re dumb, and not savvy wisecrackers like the main character in Bee Movie, voiced by Jerry Seinfeld. But why would we expect intelligent software to decide to care about us, anyway? To judge us and find we have merit? If someone told me they made a machine with a concept of what other beings are worth and it found me unworthy, based on reasons such as its being waaay more intelligent, I would not be surprised or impressed, or more than mildly insulted.

Edit: I guess people are worried about some combo of 2/3/4, where computers whose judgment we agree with basically say humans are lesser beings — like bugs — and we think about it and are like yeah, you’re right. Or computers are so human we are compelled to give them the rights of humans. I’m just not sure that actually makes sense, or at least it will take decades with many intermediate stages to talk about first.

empath75 · 8 years ago
I think one possible alternative is that we never build anything that approaches general intelligence, but we build a lot of mostly autonomous systems that are better than human beings in a lot of domains, and which may behave in ways that their creators never intended.

Once we allow AIs to manage warfare and the economy with minimal human input, they are going to alter the face of the planet in ways that we can’t predict and probably faster than we can adjust to them.

It can happen in small steps, with algorithmic trading and battlefield drones gradually being given more and more decision-making power and resources to control.

They don’t even have to have any sort of intention or independent will—only autonomy and power.

markan · 8 years ago
> There’s a funny blog post somewhere about the idea of a computer writing superhuman-level funny jokes; I wish I remembered where!

Maybe this one?

http://idlewords.com/talks/superintelligence.htm

GuiA · 8 years ago
3 and 4 are pretty hard to beat though, because humans have decades and decades of input and feedback loops with other humans to develop them.

Who’s gonna play hide and seek and read bedtime stories every night to the computers?

dgreensp · 8 years ago
“Embodiment” is most important for 2, and to fool humans into thinking you are a real human (as in the Turing test), able to talk about the human experience and relate to other humans.

Animals (including humans) are born with a version of 3 (and 4). Emotions are simple. Thought-level nuance is learned over time, but data learned by one android (or whatever) can be shared. Also, children are able to learn from comparatively few data points, and computers are beginning to do this too (see “one-shot” learning, learning from one occurrence, like the way kids sometimes pick things up).

If you make a child android that looks and acts like a child, people will read to it and play with it; there are movies about this! :)

danielam · 8 years ago
I thought the beginning of the interview showed promise, but toward the end the conversation unmoors from the topic at hand.

I really do wonder how many in the field of AI take as philosophically unsophisticated an idea as intelligent computers seriously and see it as an obvious and uncontroversial possibility. What Hofstadter is dancing around is that intelligence requires semantics and semantics is exactly what computers, by definition, lack. Knowledge is semantic in nature and thus computers cannot, strictly speaking, know anything. They cannot reason because, again, reason requires semantics. Now, we are able to model and then formalize some domains of reality under some aspect well enough such that computers can behave in very useful ways, but no matter how sophisticated such programming gets, it cannot somehow magically cross over into semantics. The notion is patently absurd. So to say that AI is, literally, far from being intelligent is like saying the color red is far from being a strawberry. No amount of red will ever amount to a strawberry.

P.S. I was reminded of this post about Sphex wasps and intelligence, with mention of Dennett and Hofstadter: http://edwardfeser.blogspot.com/2013/12/da-ya-think-im-sphex...

orangecat · 8 years ago
> Knowledge is semantic in nature and thus computers cannot, strictly speaking, know anything.

And a group of biological neurons can?

goatlover · 8 years ago
They can by virtue of being embodied. It's the body interacting with an environment that provides semantics. For humans, a lot of that is cultural. Computers only have knowledge to the extent that we deem those patterns of 0s and 1s to be information.
danielam · 8 years ago
Can they?
symstym · 8 years ago
"intelligence requires semantics and semantics is exactly what computers, by definition, lack"

This sort of assertion is experimentally untestable, not rigorously derivable from any axioms that people agree on, has no predictive power, and amounts to vague opinion passed off as fact. About a hundred years ago you might as well have been arguing about how "philosophically unsophisticated an idea like flying machines" was because flying is, strictly speaking, a behavior of flying animals.

jedharris · 8 years ago
Please explain how to tell if some being "has semantics". Suppose we encounter some alien made out of crystal and metal, but arranged in organic looking structures. We work out how to communicate, rather roughly (like Google Translate). They won't talk about their ancestors so we don't know if they were built or evolved.

How do we find out if they have semantics?

danielam · 8 years ago
Computers aren't a black box mystery. We understand how they work. We understand that they are implementations of Turing machines (or similar computational models), and Turing machines are by definition a formalism and thus syntactic in nature. Computers are therefore no more than syntactic machines (I would further claim that computers aren't strictly speaking syntactic machines either, but physical artifacts -- and we can use different physical properties and states toward this end -- that simulate a formalized, syntactical system). Syntax as such is semantically blind, so to speak. Human beings, on the other hand, do "possess" meanings. We have concepts. Pure syntax can never amount to a single concept any more than talking about climbing Mt. Everest amounts to actually climbing Mt. Everest. Any semantics we associate with computer programs is entirely an act of interpretation on the human end.
monochromatic · 8 years ago
> semantics is exactly what computers, by definition, lack

What do you think the definition of a computer is?

sgt101 · 8 years ago
A two-minute-old gazelle can outperform any current AI in terms of navigating the real world. Data processing isn't the whole thing.
yathern · 8 years ago
A two-minute-old gazelle is born with a lot of pretrained instincts. Instincts built from a genetic optimization algorithm that's been going on for millions of years. Spatial reasoning and "run from predators" are not things it needs to be taught. The data processing is already baked in.
goatlover · 8 years ago
The point is that being "baked in" is something machines lack which biological intelligences possess. Animal brains don't learn those "baked in" abilities. We may need to "bake in" similar abilities into the AIs, since recreating evolution is computationally prohibitive.
pjungwir · 8 years ago
I am curious if we will someday build AI systems that are stratified with layers alternating between "statistical" like neural networks and "symbolic" like the older AI approaches. For example once your image recognition NN is tagging things, you could feed those tags into a more symbolic reasoning system.

In theory Deep Learning should make this unnecessary (I guess?), but in practice these layers would be very useful. First they would make the system more interpretable. When your AI is a black box you don't learn anything. It would be nicer if we could gain heuristics or principles that we didn't know before, and apply them ourselves. Second, with layers we wouldn't have to trust the AI so blindly. We could see if its thinking makes sense. In particular, many AI applications can create feedback loops: they may perform well at first, but we'd like to keep monitoring them to ensure they stay that way.
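The layering idea can be sketched with a stub standing in for the network. Everything below -- the tags, confidences, rule names, and threshold -- is invented for illustration: a "statistical" stage emits tags with confidences, and a "symbolic" stage applies human-readable rules over those tags. The rule trace is the interpretability win: you can ask the system which rule fired and why.

```python
def perception_stub(image_id):
    """Stand-in for an image-recognition network: (tag, confidence) pairs."""
    fake_outputs = {
        "img1": [("dog", 0.92), ("leash", 0.81), ("grass", 0.66)],
        "img2": [("dog", 0.45), ("cat", 0.40)],
    }
    return fake_outputs[image_id]

RULES = [
    # (rule name, required tags, conclusion) -- human-readable by design
    ("walk-rule", {"dog", "leash"}, "someone is walking a dog"),
    ("outdoor-rule", {"grass"}, "the scene is outdoors"),
]

def reason(tags, threshold=0.6):
    """Symbolic layer: fire every rule whose tags are confidently present."""
    confident = {t for t, c in tags if c >= threshold}
    conclusions = []
    for name, required, conclusion in RULES:
        if required <= confident:  # all required tags were detected
            conclusions.append((name, conclusion))
    return conclusions
```

Here the low-confidence tags of "img2" correctly fire no rules, and that refusal is itself inspectable -- which is the monitoring property the paragraph above asks for.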

If neural networks resemble our unconscious, then symbolic systems resemble our reason---like "Thinking, Fast and Slow". Some people say we must be able to do "better" than NNs since babies don't need to see millions of dogs to recognize them. But the pixel-inputs of a NN do seem a lot like the rods & cones inputs of our vision, and one huge advantage of human perception is seeing in the flow of time. Our observations aren't millions of discrete snapshots, but millions of connected moments. I wonder how image recognition training would improve if we trained by showing short videos instead of still images. It seems that would make it a lot easier to recognize boundaries and possible variations.

In humans, our unconscious is not something we can easily "see" and reflect on and correct, but our reason is reflective. We can articulate principles and opinions and judgments. We can re-use them and continually challenge them, question them, modify them, even reject them. There is a cynical idea that reason is nothing but post facto rationalization, and maybe it often is, but we needn't live that way. We can live an examined life if we choose. But I can't imagine a machine ever achieving that reflexivity without a symbolic system.

And isn't that awareness close to what we mean by "consciousness"? Somewhere, I think in Jacques Maritain, I came across the definition of consciousness as simply awareness of the self, in particular that the self exists. Being able to recognize ideas as "mine" and reflect on them also seems like part of what consciousness is all about.

pixl97 · 8 years ago
>Some people say we must be able to do "better" than NNs since babies don't need to see millions of dogs to recognize them.

I always hate that example.

A dog is not a thing. A dog is a collection of things. By the time a baby sees a collection of things that is a dog, it has already had a huge amount of experience in identifying individual parts, such as faces and eyes. Of course, I wish we knew how to program AI systems that way.

> I came across the definition of consciousness as simply awareness of the self, in particular that the self exists.

If you are intelligent you can cause complex enough changes to the environment around you that you can get caught in 'advanced' feedback loops. If you don't want to waste massive amounts of energy, or even die, it is beneficial to be able to separate "this was caused by me and my actions" from "this was caused by someone/something else".

arikrak · 8 years ago
This is not at all how computers play chess:

> A computer can beat a human at chess not by searching for the satisfaction of making an elegant move, but sifting through millions of previously played games to see which move is more likely to lead to victory.

ubernostrum · 8 years ago
A lot depends on the era you're talking about, for computer chess. It used to be that forcing a chess computer out of its "book", so that it had to start relying on move-tree searches much earlier in the game when that tree is still monstrous in size, was a good or at least OK tactic. As the computers got better, and as more effort was put into anti-anti-computer tactics, it stopped being useful, but to say that this is "not at all" how computers have played chess is incorrect.
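The book-then-search shape can be sketched, shrunk down to tic-tac-toe. Unlike chess, the full tic-tac-toe tree is tiny, so falling "out of book" here costs nothing -- the chess point is precisely that the early tree is enormous. The one-entry "book" below is invented for the sketch; the board is a 9-character string, indices 0-8:

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def other(p):
    return "O" if p == "X" else "X"

def negamax(board, player, memo={}):
    # Value for the side to move: +1 win, 0 draw, -1 loss.
    # (Shared cache via the default arg -- fine for a sketch.)
    key = (board, player)
    if key in memo:
        return memo[key]
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if " " not in board:
        return 0
    best = -2
    for i, c in enumerate(board):
        if c == " ":
            child = board[:i] + player + board[i + 1:]
            best = max(best, -negamax(child, other(player), memo))
    memo[key] = best
    return best

BOOK = {" " * 9: 4}  # a one-entry "opening book": take the center

def choose(board, player):
    if board in BOOK:             # in book: canned move, no search at all
        return BOOK[board]
    best_i, best_v = None, -2     # out of book: full move-tree search
    for i, c in enumerate(board):
        if c == " ":
            child = board[:i] + player + board[i + 1:]
            v = -negamax(child, other(player))
            if v > best_v:
                best_i, best_v = i, v
    return best_i
```

In chess the equivalent fallback is a depth-limited search with an evaluation function rather than an exhaustive one, which is exactly why being forced out of book early was painful.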
yesenadam · 8 years ago
And it's also not what they said.

xapata · 8 years ago
Not historically for chess, but it's a reasonable analogy for how AlphaGo plays Go.