I'm honestly not sure why artificial intelligence comes up every time Penrose's hypothesis is mentioned. The point of artificial intelligence is not (at least in my and several other prominent AI scientists', such as Andrew Ng's, opinion) to create a conscious intelligence, but to create intelligence that can do many of the useful tasks that we can do. Whether or not it's conscious along the way is largely irrelevant.
That being said, I'm not sure why there's quite so much vitriol towards Penrose and his hypothesis. The leveraging of quantum effects in photosynthesis and enzymes has been demonstrated, and recent studies suggest that the sense of smell may also be based on quantum phenomena. So it's not all that unreasonable that there might be something quantum going on, even if it's not Penrose and Hameroff's microtubules.

Another interesting quantum hypothesis in neuroscience is Fisher's: https://www.quantamagazine.org/20161102-quantum-neuroscience...
"Quantum effects" is not the same as "quantum computing". This is the same bullshit as the D-Wave marketing, just in a different domain.
Quantum computing happening in the brain is an extremely extraordinary hypothesis, and requires some very good evidence before it's accepted. Quantum effects happening in the brain is a "well, duh, and you'll tell me water is wet later?" hypothesis, and requires some good evidence not to believe in it.
> The point of artificial intelligence is not (at least in my and several other prominent AI scientists', such as Andrew Ng's, opinion) to create a conscious intelligence, but to create intelligence that can do many of the useful tasks that we can do.
That's only one school of AI ("weak AI"). The other school ("strong AI") says that it's certainly worthwhile to create an intelligence that is capable of thought the way humans are -- if only to get a better handle on what "thought" and "intelligence" really are. Currently weak AI is "winning" because the surveillance and online-advertising industries can get more use out of it. But that doesn't put strong AI out of reach, nor render it wholly irrelevant.
Currently weak AI is winning, but you forget that weak AI has always historically won. The annals of history (as in the last ~100 years :P) are littered with failed attempts at finding "consciousness", mixing neuroscience with philosophy, with the occasional mathematician and, once in a blue moon, a computer scientist...
My skepticism comes less from a hatred of "Strong AI" and more from the fact that it has been promised over and over again with no results to show for it. In addition, every time "weak AI" makes progress, there's always someone who writes an article bashing the progress of "weak AI" and saying "it isn't REAL AI (tm)".
This is kind of similar to the reason I dislike neuromorphic architectures: you can't just assume that you're right and look down upon all others; you need to show results if you want to do that.

But weak AI might be included in that.
That isn't the common distinction made between weak and strong AI. Until we know what causes the internal point of view, it's conceivable that an AI could exhibit superhuman intelligence, as viewed from the outside, without itself having any view from the inside.
Peter Watts' Blindsight is a great read that explores this topic. In short, an alien space-faring species is discovered that seems extraordinarily intelligent, but not conscious. There are a couple more twists exploring consciousness that I don't want to give away, but it really makes one wonder: what if high intelligence doesn't require self-awareness?
> what if high intelligence doesn't require self-awareness
How could that work concretely? I'm struggling to think of how an agent could be intelligent, e.g. able to effectively achieve its goals, without being aware of itself.
I suspect that, if you could communicate with one, any sufficiently intelligent being would be aware that it's a thinking being.
What else does "self-awareness" encompass that I'm missing?
What does self-awareness have to do with consciousness? Self-awareness can be tested for. Consciousness can't; for all I know, I'm the only conscious person in the universe. Some people are evidently self-aware by their actions. The two concepts are unrelated.
Relevance is relative. So, relevant to whom? It would of course be relevant to the consciousness created, but how would the creator/programmer of the AI even recognize it as being conscious?
I think the reason that Penrose's hypothesis is brought up is that the study of consciousness has been part of human philosophy since its very beginning. Consciousness is seen as wholly the domain of humans, and Strong AI threatens the sense that humans are special in having it. That is why there is a divide between people like Penrose, who think Strong AI is impossible, and people like Kurzweil, who think Strong AI is possible.
The arguments of Chalmers, Nagel & McGinn are that there is a fundamental explanatory gap between objective processes and subjective experience. It's not because humans are special. Many other animals might be conscious. In Chalmers's case, any informationally rich stream of data could be conscious, or possibly any physical system, if one wants to go all the way with panpsychism. The issue is the hard problem, not uniqueness or humanity.
> Whether or not it's conscious along the way is largely irrelevant.
Exactly.

It's certainly relevant whether your AI technique involves creating conscious, suffering beings!
It's the distinction between programming consciousness and physically reproducing it. The latter is for physicists and biologists.
Equating an abstraction with what it's abstracting is a mistake, but assuming something cannot be programmed or simulated is also a mistake. Anything observable can be abstracted.
But to then think something can be accurately simulated without a good understanding of its nature is another mistake. So programmers should be following physics and biology.
And finally, to equate the two when the simulation is accurate is yet another mistake.
It's like mistaking the gravity in a video game for real gravity. They are never the same. But they are related: One is a simulation of the other.
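A toy version of that point, as a minimal sketch (the height, step size, and units are arbitrary choices of mine, not from the thread): the loop below tracks real free-fall timing quite accurately, yet nothing in the machine is falling.

    # "Video game gravity": a few lines of Euler integration. The model
    # reproduces free-fall timing well, but equating the model with
    # gravity itself is exactly the category mistake described above.
    g, dt = 9.81, 0.01          # m/s^2, seconds (arbitrary step size)
    y, v = 100.0, 0.0           # initial height (m) and velocity (m/s)
    t = 0.0
    while y > 0:
        v += g * dt             # accumulate "acceleration"
        y -= v * dt             # update "position"
        t += dt
    print(f"simulated impact after ~{t:.2f} s")  # analytic answer: ~4.5 s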
> But to then think something can be accurately simulated without a good understanding of its nature is another mistake. So programmers should be following physics and biology.
Biological systems work, and they evolved into intelligent, self-conscious beings without anyone who understands how they work guiding this evolution. Similarly, self-conscious AI could emerge without people having a good understanding of the source of consciousness.
> It's the distinction between programming consciousness and physically reproducing it.
Except that there is none: consciousness is "programmed" into our brains (by hard-wiring + social experiences); artificial intelligence is "physically reproduced" as a mesh of transistors and higher-level structures built from them (logic cells, LUTs, FPGAs, PSMs, CPUs, HPC clusters, etc.).
(Partially relying on software is not an issue: even a brain completely simulated in software could, in principle, exhibit consciousness just as well as a "wetware" one can.)
Even if some quantum effect is required for consciousness, and even if consciousness is required for intelligence, I'd argue that this does not mean we cannot simulate this - the Universal Quantum Computer[0], for example, was proposed by a researcher at the same institution at around the same time as Penrose when he wrote The Emperor's New Mind.

[0] https://en.wikipedia.org/wiki/Quantum_Turing_machine
I don't think quantum physics is the correct level of abstraction to think about consciousness. Consciousness, if anything, is a process at the macro level, not the atomic level.

Seems morally relevant re: does the suffering of AI entities matter.
Chain of taken for granted assumptions: Intelligence = consciousness = sentience[1].
[1] sentience defined as feeling or sensation as distinguished from perception and thought, or the possibility of suffering (a mental state the sentient subject wants to avoid).
It's possible to conceive of high-level intelligence without consciousness. It's possible to conceive of consciousness without sentience. Just having awareness of internal state, preferences and goals does not imply suffering.
>> The point of artificial intelligence is not (at least in my and several other prominent AI scientists', such as Andrew Ng's, opinion) to create a conscious intelligence, but to create intelligence that can do many of the useful tasks that we can do.

Your syntax suggests that you consider yourself to be a prominent AI scientist like Andrew Ng, but I'll assume that's just a mistake!

As to what is really the aim of AI as a field, there are two issues here. On the one hand, the original AI project as conceived back in the early '50s by the likes of Turing indeed aimed at getting computers to mimic human intelligence. Everything that has happened since then in the field has followed from that original and grandiose quest.

It's true that in more recent years the applications of AI have tended to be aimed at much humbler goals, like performing specific, isolated tasks at a level approaching (and sometimes surpassing) the human: select tasks like image classification or machine translation, etc.

However, this is by necessity and not by choice. Think of it this way: if we would benefit from a system that can perform select tasks as well as humans, then we would benefit a lot more from a system able to perform _all_ conceivable tasks that well (or better). In that sense, the motivation of the original AI project remains alive and well.

Consciousness of course seems to be an integral part of human intelligence, and it's reasonable to expect that it would be a prerequisite for strong AI also. In that sense, yeah, I'm afraid the discussion about consciousness is part of AI and will remain so for a long time to come.
Consciousness is vital to AI because there are vast amounts of data that can only be measured through consciousness — data that AI absolutely requires to solve the hardest human problems. This type of data is called "qualia" — the experience of pain, for example. An unconscious AI can process data about what causes pain: brain activity. Only a conscious AI can process data about pain itself, and only because a conscious AI measures pain from its own experience of pain.
The same can be said of any conscious experience that follows the spectrum from fear to love, terror to joy. If we want AI to solve human problems, it will need data about what it feels like to experience reality, and that data can only come by being conscious itself.
Whether or not AI programmers and researchers are attempting to create consciousness, the question of whether it's possible for them to do so is relevant to people who are interested in what consciousness is.
>The point of artificial intelligence is not to create a conscious intelligence, but to create intelligence that can do many of the useful tasks that we can do.
No. If it isn't conscious it isn't intelligent. There is no AI without consciousness because a non-conscious entity cannot make decisions independently -- react intelligently to its environment.
> This past March, when I called Penrose in Oxford, he explained that his interest in consciousness goes back to his discovery of Gödel’s incompleteness theorem while he was a graduate student at Cambridge. Gödel’s theorem, you may recall, shows that certain claims in mathematics are true but cannot be proven. “This, to me, was an absolutely stunning revelation,” he said. “It told me that whatever is going on in our understanding is not computational.”
This is a very strange conclusion to make. Maybe someone can elucidate. Godel's theorems apply to tightly-controlled formal systems. And they do not, in fact, apply to particularly weak systems (e.g. sentential logic). Why would Penrose think that Godel has anything to do with consciousness? All it seems to have done is prove that, in some systems, there are known unknowables (specifically, that the consistency of sufficiently-complex systems cannot be proved).
If anything, it should lead us to take a train of thought similar to Chalmers': consciousness is unknowable (maybe kind of like God), but even that's a stretch. Because, like I mentioned above, Godel's theorems are about formal systems. Not only is the real world not a formal system, but (at least on the quantum level) it's also non-deterministic. Now, there are probabilistic logics out there that follow Godel's findings, but there's a lot of work that needs to be done to bring that into the real world.
> Maybe someone can elucidate. Godel's theorems apply to tightly-controlled formal systems
A Turing machine is a tightly-controlled formal system. Either the brain is a Turing machine, which means it can be accurately simulated on our computers and is subject to the limitations of Gödel's theorem, or it is not a Turing machine, which is what Penrose argues for, in which case it can't be accurately simulated on Turing machines.
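To make "Turing machine" less abstract: it is a concrete formal object that our computers can run directly. A minimal simulator, as a sketch (the machine and its rule table are my illustrative choices, not anything from the thread):

    # A tiny Turing machine simulator. The rule table below increments
    # a binary number: walk right to the end, then carry 1s leftward.
    def run_tm(rules, tape, state="start", head=0, blank="_"):
        tape = dict(enumerate(tape))
        while state != "halt":
            symbol = tape.get(head, blank)
            write, move, state = rules[(state, symbol)]
            tape[head] = write
            head += {"L": -1, "R": 1}[move]
        cells = range(min(tape), max(tape) + 1)
        return "".join(tape.get(i, blank) for i in cells).strip(blank)

    rules = {
        ("start", "0"): ("0", "R", "start"),   # skip over digits
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("_", "L", "carry"),   # hit the end, turn around
        ("carry", "1"): ("0", "L", "carry"),   # 1 + 1 = 0, carry on
        ("carry", "0"): ("1", "L", "halt"),    # absorb the carry
        ("carry", "_"): ("1", "L", "halt"),    # overflow into a new digit
    }
    print(run_tm(rules, "1011"))  # prints 1100, i.e. 11 + 1 = 12

Whether the brain is equivalent to something in this class is exactly the open question; the point here is only that the formal side of the dichotomy is perfectly well defined.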
As a trained philosopher and logician, I completely understand how a Turing machine relates to Godel's Theorems, but I think you're passing the buck here.
You can make all kinds of claims:
- Either the brain is a V12 engine or it's not
- Either the brain is a perfect circle or it's not
- Either the brain is X or it's not
All these statements are trivially true, but it's not like I look at a V12 engine thinking "hmm, I wonder if it's conscious" (some pan-psychists do but that's still weird to me). Similarly, looking at the property of a formal system and jumping to consciousness makes (to me) little sense.
Besides, my knock-down argument goes something like this: I can concede that our brains are machines, but suppose I believe they are Presburger counting machines, where Godel's incompleteness does not apply. What then? There is no insight in that claim. I just think Turing machines and Godel are used as a bait and switch. Because, really, any theory of consciousness will not have anything to do with either.
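For readers who haven't met the reference: Presburger arithmetic really does escape Gödel, and the contrast is worth stating precisely (standard results, sketched in LaTeX notation):

    % Presburger arithmetic = first-order theory of (N, +), no
    % multiplication. It is consistent, complete and decidable
    % (Presburger, 1929): for every sentence \varphi of its language,
    \[
      \mathrm{Pr} \vdash \varphi \quad \text{or} \quad \mathrm{Pr} \vdash \neg\varphi .
    \]
    % Adding multiplication gives Peano arithmetic, (N, +, \times),
    % which is expressive enough to encode its own syntax; that
    % self-reference is what lets Gödel's construction through (1931).

So the point stands: "the brain is a machine" does not by itself deliver incompleteness; you also need the machine to be doing enough arithmetic.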
Well, I like to think of it this way. We know that the brain is at least a Turing machine, since we can perform the same calculations. If the brain is more powerful than a Turing machine, then any attempts to replicate it in a general sense will not work.
A pushdown automaton can solve certain problems that a finite state machine can't. Finite state machines can't solve the general palindrome problem, but they can be designed to solve palindromes of a bounded length / alphabet. The complexity explodes with the length of the string and number of characters, but it can be done.
To me, our attempts at creating a general AI with a Turing machine are starting to take a similar shape. We know we can generate an algorithm (machine learning) that can solve a contained problem.
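A minimal sketch of that separation in Python (the state-count bound is a standard lower-bound observation; the stack version cheats relative to a true one-pass pushdown automaton by using len() to find the middle, which a PDA would have to guess nondeterministically):

    # Why palindromes separate finite memory from unbounded memory.
    # A finite-state recognizer must remember the entire first half of
    # the string, so the states it needs grow exponentially with the
    # length bound; a stack needs no bound at all.

    def dfa_states_needed(alphabet_size: int, max_len: int) -> int:
        # Distinct first halves the machine must be able to distinguish.
        return alphabet_size ** (max_len // 2)

    def is_palindrome_stack(s: str) -> bool:
        # Push the first half, then pop it against the second half.
        mid, odd = divmod(len(s), 2)
        stack = list(s[:mid])
        for ch in s[mid + odd:]:
            if not stack or stack.pop() != ch:
                return False
        return not stack

    print(dfa_states_needed(26, 20))       # ~1.4e14 states for length <= 20
    print(is_palindrome_stack("racecar"))  # True, at any length

The analogy in the comment above, then: today's machine-learning systems look like the bounded recognizers, solving ever larger but still contained versions of the problem.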
Indeed. There is nothing to indicate that human brains have done anything that can't be achieved with a polynomial-time heuristic search algorithm.
Most people who have read "The Emperor's New Mind" have been surprised that Penrose doesn't seem to realize that. He has very strong intuitions about consciousness and cognition, but he can't explain them to others.
Like create culture, write the works of Shakespeare or create a workable theory of the mind of others? Or fall in love, teach a child to play cricket or understand (and act on) an Opera?
Also, why should I care a heckin' heck about a polynomial-time heuristic search? I'll work out complexity when I have bounds on correctness... not before.
The argument that those who find something beyond formalism must formalise it in order to be regarded as serious is not a serious argument!

1. Consciousness (subjective experience, qualia)
2. Intentionality (the aboutness of mental content)

What would a conscious algorithm even look like?

Umm, because it is an open problem? This is like saying the problem of dark matter is not a scientific problem because we don't know everything about it.
> > Gödel’s theorem, you may recall, shows that certain claims in mathematics are true but cannot be proven
This is not correct to begin with. Incompleteness in this setting means that a formal system is either incomplete or indeed inconsistent (i.e. wrong). A "known unknown" is just an oxymoron.
> that the consistency of sufficiently-complex systems cannot be proved
that sounds more like it. EDIT: I have to read up on it again and again ... the consistency of sufficiently-complex systems cannot be proved within that system.
> that sounds more like it. EDIT: I have to read up on it again and again ... the consistency of sufficiently-complex systems cannot be proved within that system.
Nope. Godel's Incompleteness Theorems (GIT) are metalogical[1]. They leak to any (sufficiently-complex) formal system. You can't logic them away by going up a level of abstraction.
Let's say you have a (sufficiently-complex) system A which is susceptible to GIT. You can describe A in terms of a meta-system B. By definition, B will also be susceptible to GIT. You can describe B in terms of C, and so on, but every meta-system that follows will still be susceptible to GIT.

[1] http://www.mbph.de/Logic/Para/Metalogic.pdf
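To spell out why climbing the ladder of meta-systems never helps, here is the regress in symbols (a standard construction; it assumes each theory is consistent, recursively axiomatizable and contains enough arithmetic, and the consistency of each step follows if PA is sound):

    \[
      T_0 = \mathrm{PA}, \qquad T_{n+1} = T_n + \mathrm{Con}(T_n).
    \]
    % Each level proves the consistency of the level below it:
    \[
      T_{n+1} \vdash \mathrm{Con}(T_n),
    \]
    % but each level is again a consistent, recursively axiomatizable
    % theory, so the second incompleteness theorem re-applies:
    \[
      T_{n+1} \nvdash \mathrm{Con}(T_{n+1}).
    \]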
For an intelligent response to Penrose, see Scott Aaronson, who recognizes Penrose's genius while still disagreeing vigorously with his out-of-the-mainstream ideas: http://www.scottaaronson.com/blog/?p=2756
> One of the many reasons I admire Roger is that, out of all the AI skeptics on earth, he’s virtually the only one who’s actually tried to meet this burden, as I understand it! He, nearly alone, did what I think all AI skeptics should do, which is: suggest some actual physical property of the brain that, if present, would make it qualitatively different from all existing computers, in the sense of violating the Church-Turing Thesis. Indeed, he’s one of the few AI skeptics who even understands what meeting this burden would entail: that you can’t do it with the physics we already know, that some new ingredient is necessary.
Aaronson spends far too little time firing the real cannons he's leveling at Penrose. He is attempting to down a tree by snipping roots individually instead of just pushing the rotting trunk over.
Penrose's argument is hollow. We understand the biophysics behind how the brain works. They aren't complicated at the level of detail you need to understand how the system works. We understand how neurons interact with each other. The evidence is consistent not only with our well settled understanding of chemical and biological systems, but also increasingly consistent with our development of information systems at scale.
The real gap is whether or not the totality of 'consciousness' is really just neural interactions at scale + starting-state data, but the more we learn, the more that mystery vanishes. We understand the low-level perceptor->analysis models much better now, and we can map perceptor inputs at scale to outcomes in model tuning. In short, the consciousness of the gaps is rapidly losing its hiding spots.
Penrose's argument is taken seriously because we have collectively created a tremendous philosophical and institutional infrastructure around the idea of free will and the theory he attacks strongly implies there is some level of determinism in our cognitive systems. Since he is irreproachable on a personal or intellectual peerage level, he is a fantastic champion of this counter-cultural perspective.
However, if we apply even the barest of epistemological tools to the issue, we rapidly recognize that even if Penrose is correct, the chance of his position being accurate is so remote and so unverifiable so as to be useless.
But the existence of a counter-argument against the deterministic mind itself, absent of its validity, is itself useful. It allows us to collectively hem and haw before changing our views and institutions to fit our understanding of how people work. Which means Penrose's argument is not going away anytime soon.
>However, if we apply even the barest of epistemological tools to the issue, we rapidly recognize that even if Penrose is correct, the chance of his position being accurate is so remote and so unverifiable so as to be useless.
You are saying that if he is right then it's probably not accurate?
>But the existence of a counter-argument against the deterministic mind itself, absent of its validity, is itself useful. It allows us to collectively hem and haw before changing our views and institutions to fit our understanding of how people work. Which means Penrose's argument is not going away anytime soon.
If the mind is deterministic, how can we change our views - how can the position be useful? Things are, and you will, or will not.
Free will is an entirely different issue. Consciousness is about subjective experience, and why any physical system would have it. Determinism is completely irrelevant.
A complementary thing I like about Aaronson's response is that he does the same for his side of the argument: He acknowledges that there are difficult and unsolved issues for the strong AI viewpoint. When a proponent of either side minimizes, or tries to avoid, the gaps and difficulties with his point of view, he creates a straw man for the other side to attack, and a battle between straw men will not lead to meaningful progress.
Penrose is a unique genius. His deep geometric intuition is extremely rare even among mathematicians. He uses this superior geometric intuition to teach physics in the book "Road to Reality" at a level that surpasses even Feynman.
There is a common underlying assumption that the hard problem of consciousness is tied to high-level cognitive or computational capabilities. I don't see the connection. The crux of being conscious is having the cognitive ability to be aware-conscious-attentive-reflective for at least a tiny amount of time. If we could scientifically determine/agree on what consciousness is, we should be able to make a nice hyper-aware-of-blue-and-knowing-it robot, and it would be a relatively simple one.
"[Penrose] explained that his interest in consciousness goes back to his discovery of Gödel’s incompleteness theorem while he was a graduate student at Cambridge. Gödel’s theorem, you may recall, shows that certain claims in mathematics are true but cannot be proven. “This, to me, was an absolutely stunning revelation,” he said. “It told me that whatever is going on in our understanding is not computational.”"
Penrose starts from a specific conclusion, that formal systems are limited in ways that he clearly is not, and then searches for some way to explain the difference.
That's kind of the modus operandi of physics, I suppose. Max Planck started from a specific conclusion, that the spectrum of the Sun is limited at high frequencies in ways that Maxwell's theory clearly is not, and then searched for some way to explain the difference.
The difference being that Planck knew what the reality was (we knew what the spectrum of the sun looked like), but with consciousness we don't really know what the reality is (there is no "spectrum" of the mind for us to match).
The first order Peano axioms are not categorical but are recursively enumerable. The second order axioms are categorical but not recursively enumerable.
There are statements that are true in the standard model but not true in a non-standard model of the first-order axioms. Such a true statement is not something that can't be proven true. It just can't be proven true using the first-order system.
If I understand this correctly, does it not imply that there is a class of statements/ideas which Penrose (or I) can deduce, but which cannot be algorithmically proved? If so, my next question is: what is an example of such a statement?
1. Let's suppose for the sake of argument that humans really can see the inherent truth of "Peano Arithmetic is consistent". That doesn't mean humans violate Gödel's Incompleteness Theorem: it could just mean that humans use axioms stronger than PA (see the note after this list).
2. Gödel's Incompleteness Theorem only applies to systems that are perfectly logically consistent. Not sure how Penrose didn't notice, but humans... aren't.
3. When scientists proposed Quantum Mechanics as a replacement for Classical Mechanics, it was on them to explain how Quantum Mechanics simplified to Classical Mechanics in the common case. "Penrose Mechanics" is an even more radical departure — especially from a physics of computation standpoint, as Penrose Mechanics by definition would allow solving at least some of the problems in (ALL - R) in ~polynomial time. Penrose needs to explain how Penrose Mechanics reduces to Quantum Mechanics in the common case.
4. Penrose proposes that (a) there exist new physics, (b) that evolution has learned to computationally exploit the new physics via microtubules, and yet (c) that humans are the only lineage to make use of this feature of microtubules, even though microtubules are found in all eukaryotic cells (from mushrooms to amoebae). From a predator-prey standpoint alone, it would seemingly be a huge evolutionary advantage to be able to compute NP or R functions in polynomial time. (That ability is not _strictly_ implied by Penrose Mechanics, but it's a very likely consequence.) Penrose needs to explain why only humans are taking advantage of the computational power of microtubules, when microtubules have existed for billions of years and across millions of species.
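The note promised in point 1, in LaTeX notation (both facts are standard):

    \[
      \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
      \qquad \text{yet} \qquad
      \mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{PA}),
    \]
    % since ZFC can construct the standard model N and verify PA's
    % axioms inside it. A human who "sees" that PA is consistent may
    % simply be reasoning in such a stronger system, which in turn
    % cannot prove its own Con(ZFC).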
>2. Gödel's Incompleteness Theorem only applies to systems that are perfectly logically consistent. Not sure how Penrose didn't notice, but humans... aren't.
Why are humans not logically consistent, then, if they are, as materialists claim, something that can be abstracted with a computer program given full information about their workings?
Um, you seem to be operating on a confusion of ideas. Materialism does not imply that humans are logically consistent. The universe in which we exist is (probably?) a logically consistent system, but that's true for both materialism and non-materialism. The difference between the two is which set of rules the universe runs on, not whether those rule sets are internally consistent.
When I say "systems that are perfectly logically consistent" and "humans... aren't", I'm saying that the ideas humans have in their heads are not logically consistent. It's possible to write down "2+2=5" on a piece of paper, even if 2 plus 2 doesn't actually equal 5, and it's likewise possible for humans to believe "2+2=5" even if 2 plus 2 doesn't equal 5.
All the mammals have roughly the same brain architecture and the same DNA. Whatever makes brains work is present at the mouse level in some form. We really ought to be able to build a mouse brain by now. A mouse brain has about 75 million neurons. That's not a big number for modern hardware. If we knew what to build, it would probably fit in a 1U rack.
Some years ago I met Rodney Brooks, back when he was doing insect robots. He was talking about a jump to human-level AI as his next project. I asked him, "Why not go for a mouse next? That might be within reach." He said "I don't want to go down in history as the man who created the world's greatest robot mouse." He went off to do Cog [1], a humanlike robot head that didn't do much. Then he backed off from human-level AI, did vacuum cleaners with insect-level smarts, and made some real money.

[1] https://en.wikipedia.org/wiki/Cog_(project)

There are no mice on HackerNews. I think.
It is really not that simple, unfortunately. CMOS is the only submicron technology that can be produced with sufficient repeatability, and no one has yet come up with a proposal for how to produce a CMOS chip with that integration density. What you have to realize is that each neuron typically has up to 1,000 connections to other neurons, so you need to fit in 75 million x 1,000 = 7.5e10 synapse circuits in your design as well. That is already two orders of magnitude more transistors than any currently produced chip.
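Some rough arithmetic on those figures, as a sketch. The neuron and synapse counts come from the two comments above; the bytes-per-synapse figure is my assumption, purely for illustration:

    # Back-of-envelope sizing for a mouse-scale network.
    NEURONS = 75_000_000            # from the comment above
    SYNAPSES_PER_NEURON = 1_000     # from the comment above
    BYTES_PER_SYNAPSE = 4           # one float32 weight (assumption)

    synapses = NEURONS * SYNAPSES_PER_NEURON        # 7.5e10 connections
    memory_gb = synapses * BYTES_PER_SYNAPSE / 1e9  # ~300 GB of weights

    print(f"{synapses:.1e} synapses, ~{memory_gb:.0f} GB for weights alone")

Storing the weights is already within reach of a single large server; whether one weight per synapse captures what a real synapse does is the open question, and it is where the repeatability problem above actually bites.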
Yes, but one mammal has created a technological civilization.
We did, but is that really categorically different than groups of primates using primitive tools [1]?
I think the idea that humans are categorically different from other species is misguided. Instead, consciousness and intelligence seem to be more continuous than discrete, particularly when looking at semi-intelligent animals like monkeys, dolphins and octopi. Animals in that class can all learn pretty complicated tasks and are able to make use of tools. Self-awareness and consciousness aren't something we understand fully enough to exclude all animals from possessing.
The only thing that seems particularly unique about humans is our ability to use complex language and record it. Passing down knowledge from one generation to the next is the _only_ reason we have "technological civilization".

[1] http://www.bbc.com/earth/story/20150818-chimps-living-in-the...
For the completely opposite view, listen to Daniel Dennett on The Life Scientific [1].
Dennett argues that combining Darwin's strange inversion of reasoning (complexity from bottom-up iterative refinement) with Turing's Universal Machine provides a way of understanding how we are machines, built of machines, built of machines, etc., and that it is the hierarchy that allows the complexity of minds to emerge.

That hierarchical iterative schemes are unexpectedly powerful is well mirrored by the recent successes of deep neural nets, and Dennett cites Hinton.
It's worth a listen and summarises his new book, From Bacteria to Bach.

[1] http://www.bbc.co.uk/programmes/b08kv3y4

I don't think he does either.

http://www.newyorker.com/magazine/2017/03/27/daniel-dennetts...

He does not seek to explain the origins of the universe but argues that this is not necessary to understand how minds could come about.

Is there a 'why'? Some people say it was 'created', but then the creator 'is, and we can't say why'.
Can anyone here defend this theory? I'm genuinely curious. It may well be the case that human consciousness relies on quantum effects. But we know that we can simulate a quantum computer using a regular computer. Which means that in principle, you don't need QM to have consciousness, even if human consciousness makes use of QM.
So, while it may or may not be true that the brain uses QM, it doesn't seem to really explain anything of interest. It doesn't make consciousness any less mysterious, or give any real insight into how we might create or understand our own consciousness.
Given that (or refute the premises, if you please), why is this theory interesting, relevant, or correct?
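The premise that a classical machine can simulate a quantum one (at exponential cost) is easy to make concrete. A minimal state-vector sketch with NumPy; the two-qubit circuit is my illustrative choice and has nothing to do with Penrose's specific proposal:

    # State-vector simulation of a tiny quantum computer on a classical
    # machine. Memory and time grow as 2^n, which is why this works
    # "in principle" but does not scale.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    I = np.eye(2)

    def apply_1q(gate, qubit, n, state):
        # Build the full 2^n x 2^n operator as a Kronecker product.
        op = np.array([[1.0]])
        for q in range(n):
            op = np.kron(op, gate if q == qubit else I)
        return op @ state

    def apply_cnot(control, target, n, state):
        # Permute amplitudes: flip the target bit wherever control is 1.
        new = state.copy()
        for i in range(2 ** n):
            if (i >> (n - 1 - control)) & 1:
                new[i ^ (1 << (n - 1 - target))] = state[i]
        return new

    n = 2
    state = np.zeros(2 ** n)
    state[0] = 1.0                        # start in |00>
    state = apply_1q(H, 0, n, state)      # superpose qubit 0
    state = apply_cnot(0, 1, n, state)    # entangle into a Bell state
    print(np.round(state, 3))             # [0.707 0. 0. 0.707]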
Penrose explicitly excludes quantum computing from the basis of consciousness. Unfortunately I didn't find the exact interview where he was stating this, but he was clear on it:
Quantum Computers are computers after all and he is talking about non-computable physics.
The reason he looks at quantum mechanics is that it seems to be missing something from our understanding (the "reduction" of the unitary evolution), and he "hopes" that this is non-computable.

Does that make sense?
Sort of. But fundamentally, if there is a quantum phenomenon that our brains are harnessing, then so too can a quantum computer. So I don't see how he can draw such a distinction, even in theory.
I think we're so far from understanding consciousness that having a theory of what makes it work (or even that "if it works, it relies on thing X") is dodgy.
E.g. let's suppose I make the claim that some people are conscious and others just seem like they're conscious. Can we test this?
Quite respectable people have theorized -- or should I say bloviated? -- that consciousness is merely an emergent property of complex systems. Again, we can't really define what consciousness is, so it's an interesting dinner party conversation starter, but it's not falsifiable.