The issue with Aaronson's response is that it comes from the perspective of first accepting Chalmers' "Hard Problem of consciousness". The "Hard Problem of consciousness", despite the name, is actually a statement of position. Briefly, it states that:
a) We have experiences, like being hungry, tasting a strawberry, seeing blue, etc.
b) It's possible to imagine a being which located food when hungry, ate when necessary, used colours to navigate the world, etc, but did not have these conscious experiences. To put it another way, what we've learnt so far about how the brain works gives us great insight into how we would eat, navigate, etc, but does not give us any insight into either how or why we would have conscious experiences.
c) Therefore conscious experiences are not explicable by physical brain processes.
This is a belief arising from an appeal to intuition and does not present a testable or falsifiable proposition.
Integrated Information Theory (which I am not a vigorous proponent of) posits that the experience of consciousness is related to the level of "integration" of a system. However, if you come to this while believing in the "Hard Problem", that cannot possibly be true, because IIT relates consciousness to physical properties of the system such as connectivity, but the "Hard Problem" defines consciousness as something which does not arise from any physical property of the system.
Aaronson's point later in the post that it is possible to construct a function which has an arbitrarily high phi but is nonetheless obviously not conscious-- which wrecks IIT completely, at least in its current formulation-- does not depend on anything to do with the hard problem of consciousness.
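For intuition about what a phi-like quantity even measures, here is a drastically simplified toy sketch. This is not Tononi's actual phi (the real formalism works over cause-effect repertoires of a system's dynamics, not a static joint distribution), and the names `toy_phi`, `entropy`, and `marginal` are my own; it only illustrates the "integration" idea of minimizing information across bipartitions that Aaronson's counterexample exploits:

```python
# Toy "integration" measure: minimum, over all bipartitions of a set of
# binary units, of the mutual information between the two halves. A high
# value means no cut can split the system without losing information.
# NOT the official IIT phi -- just the bipartition-minimization flavor.
from itertools import product, combinations
from math import log2

def entropy(dist):
    """Shannon entropy of a {outcome: probability} dict, in bits."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, idxs):
    """Marginal distribution over the units listed in idxs."""
    out = {}
    for state, p in joint.items():
        key = tuple(state[i] for i in idxs)
        out[key] = out.get(key, 0.0) + p
    return out

def toy_phi(joint, n):
    """Minimum mutual information over all bipartitions of n units."""
    best = float('inf')
    units = range(n)
    for k in range(1, n // 2 + 1):
        for part in combinations(units, k):
            a = list(part)
            b = [i for i in units if i not in part]
            # I(A;B) = H(A) + H(B) - H(A,B)
            mi = (entropy(marginal(joint, a))
                  + entropy(marginal(joint, b))
                  - entropy(joint))
            best = min(best, mi)
    return best

# Two perfectly correlated bits: every bipartition loses a full bit.
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair bits: the cut loses nothing.
independent = {s: 0.25 for s in product((0, 1), repeat=2)}
print(toy_phi(correlated, 2))   # 1.0
print(toy_phi(independent, 2))  # 0.0
```

Aaronson's objection is precisely that quantities of this general shape can be driven arbitrarily high by structures (e.g. large expander-like networks) that nobody would call conscious.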
I don't think you need b) to make the argument work. You just need to point out that a) isn't present in physical theories, except as labels or correlations. The zombie argument is b), which is just one of several thought experiments to illustrate the argument being made, but it's not necessary to make the argument work. Chalmers, Nagel, McGinn and Block have all made arguments that don't rely on b).
Nagel states it most clearly: science is the view from nowhere. The world doesn't feel like, taste like, or look like anything on its own, because those are creature-based sensations which depend on the kind of sensory organs and nervous systems an animal has.
I think the answer to a) and b) is actually quite obvious: we are not automatons that simply eat when the need arises. The experience of hunger decouples our behavior from the need to eat, and we can plan according to the strength of our hunger relative to other needs.
We are not automatons that simply eat a strawberry because it is edible. We are decision-making organisms that can adjust the composition of our diet based on the chemical properties of the food. We can presume that there is an evolutionary advantage to being able to taste and therefore select from multiple dietary options.
The conscious experiences as described are extremely subtle forms of information that allow us to plan and make decisions rather than simply react blindly to the world. I think it is pretty obvious that this is a massive advantage, and it is also more in line with my experience as a conscious being.
Looks like a hyperbolic flaw in reasoning. The "Hard Problem", as described, seems more to define consciousness as something which does not arise exclusively from (known) physical properties of the system.
In what reality would the phenomenon of consciousness be defined without any underlying physical property? It seems that study would aptly be called "metaphysics".
that said, I'm still keen to see more/better discussion of IIT, and/or more modern extensions. IIT is certainly quantitative, and arguably elegant, despite its problems. So it puzzles me how eager some people are to just junk it rather than repair it.
I rarely see IIT characterized as complete junk. Personally, I don't see much value in it as I don't think it answers, or even grapples with, the hard problem of consciousness. Tell me how integrated information gives rise to subjective experience and maybe I'll start buying what they're selling.
I was nodding along to everything until they got to the 4th and 5th axioms:
> Integration: ... seeing a blue book is irreducible to seeing a book without the color blue, plus the color blue without the book
What? Why is this taken as a given? This is not at all evident. You could show someone a blue book and then ask them to imagine that same book, but with a red cover instead, and many people would feel perfectly comfortable doing this, having some experience close to actually seeing that red book. If that doesn't straight-up prove this axiom wrong (I personally think it does), then it at least shows that this axiom is not clearly "evident".
> Exclusion: my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so
Have these people never been bored? Have they never heard of how "time slows down" during a car crash? This is not only non-evident, it's another one that seems patently false if you actually talk to any human beings about their own experiences.
From someone who has dabbled in information theory (the real one), I am just as confused as you.
What I have observed in the past decade is that calling things “information theory of something” makes it somehow more palatable for a broader audience.
I don't think IIT is a correct theory or even on the right track. Nevertheless I fail to see how it's "silly". It's no sillier than any other putative scientific theory of consciousness -- less silly, since IIT actually makes a testable prediction.
I don't see what makes them silly. Metaphysics is hard, and speculative realism is a proposed answer to modern transcendental idealism stemming from Kant, where the worry is that we can't say objective things about the world independent of human thought. Things like dinosaurs existing before humans evolved are seen as correlated with our experiences of fossils in the ground, not as an objective truth about the universe. Speculative realism is a way around that while respecting the philosophical arguments of the Kantians.
Our mental experiences seem to be subjective, so what prospect is there of making any objective statement about the world if correlations are inadequate for the purpose?
How does déjà vu, that is, the re-experiencing of an experience, fit into this theory? It appears that the Information axiom fails to be Essential when one considers déjà vu.
I don't see that déjà vu poses a problem. Déjà vu is simply a feeling of familiarity associated with an experience. It doesn't involve actually having an experience more than once.
Possibly - but I could just claim that for me déjà vu is exactly when two experiences are not differentiable from one another. And if someone claims to have experienced this moment before, identically, how would I refute their claim? It's their experience after all.
https://scottaaronson.blog/?p=1799
(Physics A.B. from Harvard & PhD. from UC Berkeley, FWIW)
https://en.wikipedia.org/wiki/Object-oriented_ontology
https://en.wikipedia.org/wiki/Speculative_realism
What is it and why is it silly?
They are ideas from people who don't like anthropocentrism, which Integrated Information Theory is also opposed to.
It's worth noting that all of the people who believe in any of this are philosophical wingcucks like Nick Land.