My personal preference is panexperientialism (essentially a bare-bones panpsychism): the idea that the phenomenon is somehow connected with change, belongs in some way to anything capable of change, and is probably a fundamental property of nature, as fundamental as space or time.
It's important to divorce consciousness from all ideas of "thought", "will" etc. to consider its essence, which is more connected with "awareness of being", though even that is too complex I think.
Obviously this is complete conjecture, but it has growing philosophical support - at least as an idea worth discussing - I think.
Probably because consciousness requires some kind of underlying mental processes, which take time to achieve. I can only guess that it's similar to an instance of a computer program such as a web browser, i.e., it isn't a "thing" itself but is built out of a multitude of underlying calculations on a physical computer.
This also implies to me that consciousness, not being a physical thing itself, comes and goes within the brain, with the sleep cycle or just through lack of attention. The only interaction between one instance of consciousness and the following one would then be via memories.
Why not entertain the notion that matter is consciousness? It doesn't just feel like something to be a particular atom; that particular atom exists because it feels like something to be that particular atom.
If my consciousness is only the consciousness of a particular atom, or more likely a fundamental particle like an electron, why do I have the illusion that I'm an entire human animal? How would my vision, for example, be sent to every electron in my body, for it to be independently conscious?
Manzotti’s account (as described in the article) ignores dreams, as well as the fact that directly zapping the brain elicits experience (e.g., magnetic stimulation of the visual cortex triggers colorful flashes known as phosphenes).
Graziano’s approach is more interesting, as a theory of attention, but it falls short on qualia and focuses on peculiarities of the human brain, assuming that a cerebral cortex is necessary for generating a conscious experience.
My pet theory is that consciousness can be modeled as a mathematical dual of the physical world. Think Voronoi diagrams vs. Delaunay triangulation. They are distinct, imbued with their own properties, but inextricably linked in that you can generate one from the other.
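The duality analogy is a real piece of computational geometry, and a tiny sketch makes the "generate one from the other" property concrete (pure Python with a deliberately symmetric example; a real computation would use a library such as scipy.spatial):

```python
# A minimal illustration of the Delaunay/Voronoi duality mentioned above:
# the vertices of a point set's Voronoi diagram are exactly the circumcenters
# of its Delaunay triangles, so either structure can be generated from the other.

def circumcenter(a, b, c):
    """Circumcenter of triangle abc: the point equidistant from all three vertices."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# Four sites at the corners of a square: the Delaunay triangulation consists of
# two triangles, and both circumcenters coincide at (1, 1) -- which is precisely
# the single Voronoi vertex of those four sites.
t1 = circumcenter((0.0, 0.0), (2.0, 0.0), (0.0, 2.0))
t2 = circumcenter((2.0, 0.0), (0.0, 2.0), (2.0, 2.0))
assert t1 == t2 == (1.0, 1.0)
```

Distinct objects, distinct properties, but each fully determines the other, which is what the comment's "inextricably linked" is pointing at.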
You could call dreams something like running your mental models on training sets. If it’s physical when it’s electrons and silicon, can it be physical in the brain as well?
What makes you believe that your conscious perceptions have any specific (let alone regularizable) kind of relationship to reality at all? In all likelihood, conscious perceptions are guided entirely by evolution, emphasizing those aspects of reality needed to keep us alive and filtering out those we can safely ignore.
> conscious perceptions are guided entirely by evolution, emphasizing those aspects of reality needed to keep us alive and filtering out those we can safely ignore.
It's also known that it's often evolutionarily advantageous to have a model of the surroundings that is more accurate than that of your competition. (There's a paper I've read about that; I just don't have the time to find it now. Maybe somebody has it more ready.)
Therefore the successful products of evolution correctly reflect the "outside world" in their models, and even have "safety mechanisms" and "error correction" facilities (based on feedback, of course). There are even experiments with people: if you get glasses that invert the picture you see, for a while you see the world upside down, but if you wear them long enough, an internal adjustment of the model happens and you see up as up again, even though the "signal" is provably reversed compared to what you received for your whole life.
On the opposite side (that "accurate enough" is not "always correct"), we also already know of examples where the "accuracy" breaks down in humans: that's the cause of people ascribing phenomena with purely natural causes to the agency of gods. There was no evolutionary need for an intrinsic development of a more accurate model of these phenomena (e.g., what makes the Sun move across the sky, or what the stars are). The scientific method, luckily, allowed humans as a group to overcome these limitations; see e.g. Aristarchus, some 2200 years ago, that is, at least 200 years BCE: https://en.wikipedia.org/wiki/On_the_Sizes_and_Distances_(Ar... (note that at that time the term "science" still didn't exist).
I think that my consciousness (that perpetual “now” we’re trapped in) happens in my brain based on information about the past state of the world. I know it isn’t a perfect rendition (see illusions, hallucinations, cognitive biases and other brain farts). I totally go by Anaïs Nin’s “We don’t see the world as it is, we see the world as we are”.
The only way I know it has anything to do with the rest of the world is through the impact of my actions.
The closest we will ever come to describing consciousness is simply describing the correlates of consciousness. The "ultimate cause" of it will forever be a mystery, behind the veil.
Consciousness appears to exist outside of the physical world, in that we can describe a physical process entirely without invoking consciousness. Because of this, consciousness is beyond the scientific method and our fundamental understanding in principle, not just in practice.
This is why it is called the "hard problem" of consciousness. In principle, there is no framework of deduction or reasoning by which we can explain the emergence of qualia.
>Consciousness appears to exist outside of the physical world, in that we can describe a physical process entirely without invoking consciousness.
Only in the same ways that e.g. emotions exist 'outside of the physical world' and we are doing some work with those (e.g. we know more about the effects of hormones on them now).
I completely disagree that this is unstudiable or 'behind the veil'. We can create beings with consciousness (babies) using a purely physical process, of course there is some way to learn more.
Personally, I assume the main problem is (as it so often happens) that 'consciousness' is too loosely defined and explaining it will be easier with more strict definitions and a deeper understanding of the brain and body.
The non-materialist argument is similar to creationist critiques of evolution. Scientists who study these things see a material explanation, but because these are enormously complex emergent properties, they don't know the exact details. Critics then use a god-of-the-gaps argument, saying that if a material explanation can't explain everything right now, then a supernatural explanation (which explains far less and runs counter to the available evidence) should be favored. Behind both arguments lies an unease that the scientific approach knocks humans off their pedestal and makes them seem like everything else.
It's not surprising that the only neuroscientist in the article is arguing for a materialist approach.
I don't think trying to solve it materialistically will work, because the moment we come up with an empirical measurement of consciousness, David Chalmers will crash in through a window and say "ah, now consider a hypothetical class of people for whom this measurement indicates consciousness, but who do not in fact experience qualia - we could call these qualia zombies, or Q-zombies for short."
Maybe the correct response is just to dismiss zombie theories as incoherent. But people are already doing that - I doubt collecting more physical evidence and improving our understanding of cognition will strengthen the case against P-zombies, even though it'd be useful knowledge for other reasons.
I wouldn't say that it is 'beyond us' so much as that it is simply unexplainable by empirical models of reality. Empiricism is a bit like Newtonian gravitation: it isn't wrong, but it isn't complete either. It's currently the predominant philosophical model of human society as a whole, but it wasn't always so, and there's no reason to expect it always will be.
See, this seems entirely obvious to me, but so many people are stuck in a 100% materialist worldview that they expect there to be some scientific "explanation" for consciousness. And that truly baffles me.
A materialistic approach to explaining consciousness is necessary, since consciousness can be affected by physical changes to the brain, which implies that consciousness is fundamentally a physical process.
I tend to think the same. But then I always question whether I just lack imagination in how to formulate the problem scientifically. Because it does feel a bit like an excuse.
I'll bite. I'd take a body running a simulation of the environment to compare possible outcomes of its actions. At some point along the path driven by evolution, the body manages to put itself into the simulation, provoking a whole spectrum of interesting effects in keeping the two in sync.
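A minimal sketch of that idea (the toy environment and all names are my own illustration, not a claim about brains): an agent that rolls its world-model forward to compare outcomes of candidate actions, with a model of itself included in the simulation and synced to its real state before each decision.

```python
# Toy model-based agent: the simulated world contains a model of the agent
# itself ("self_pos"), which is what the comment means by the body putting
# itself into its own simulation.

def simulate(model, action):
    """Imagined rollout: move the *modelled* self and score the outcome."""
    imagined_pos = model["self_pos"] + action
    return -abs(model["goal"] - imagined_pos)   # closer to the goal is better

def choose_action(real_pos, goal, actions=(-1, 0, 1)):
    # Sync the self-model with the body's actual state, then pick the action
    # whose imagined outcome scores best.
    model = {"self_pos": real_pos, "goal": goal}
    return max(actions, key=lambda a: simulate(model, a))

assert choose_action(real_pos=0, goal=3) == 1    # stepping toward the goal wins
assert choose_action(real_pos=5, goal=3) == -1   # and back toward it from beyond
```

The "keeping this in sync" problem the comment mentions is visible even here: if `self_pos` drifts from the body's real position, the imagined rollouts silently stop matching reality.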
I don't know how to quote your post, so forgive me if I'm doing it wrong...
> The closest we will ever come to describing consciousness is simply describing the correlates of consciousness. The "ultimate cause" of it will forever be a mystery, behind the veil.
I don't agree about it being forever a mystery.
> Consciousness appears to exist outside of the physical world, in that we can describe a physical process entirely without invoking consciousness. Because of this, consciousness is beyond the scientific method and our fundamental understanding in principle, not just in practice.
I do like to think about this. A lot. I compare it to the experience of using a computer. Turn it on and supply it with an operating system, and all of a sudden there is a "there" there, even though physically there is not. I'm not an OS person, kernel-wise, and I've not had any conversations with them on this topic. But I imagine them to be too close to the topic... unable to see the forest for the trees. Take a step back and look at it. Amazing things can happen in a place that is not physically there. Maybe looking at the 80x25 isn't so spectacular (although I am a CLI guy at heart), but think about VR. There is certainly a there there. Simulated, sure, but it exists nowhere in the physical world.
To me, that scenario is something akin to, but certainly not the equivalent of, consciousness. That scenario is caused by electronics... chips and electricity. Physically damage the chips or turn off the electricity and it certainly changes, or even disappears. Yet, that simulation isn't physically extant.
"It's very hard to change people's minds about something like consciousness, and I finally figured out the reason for that. The reason for that is that everybody's an expert on consciousness. (...) With regard to consciousness, people seem to think, each of us seems to think, "I am an expert. Simply by being conscious, I know all about this." And so, you tell them your theory and they say, "No, no, that's not the way consciousness is! No, you've got it all wrong." And they say this with an amazing confidence."
"A lot of people are just left completely dissatisfied and incredulous when I attempt to explain consciousness. So this is the problem. So I have to do a little bit of the sort of work that a lot of you won't like, for the same reason that you don't like to see a magic trick explained to you. How many of you here, if somebody -- some smart aleck -- starts telling you how a particular magic trick is done, you sort of want to block your ears and say, "No, no, I don't want to know! Don't take the thrill of it away. I'd rather be mystified. Don't tell me the answer." A lot of people feel that way about consciousness."
I think this is why the Socratic method is so effective. When Socrates would debate with someone, he would almost never just state claims as if he knew the answers. Rather, he would always act as if his audience was the expert and he himself knew nothing. Acting in that way, he would probe the audience. Invariably, the audience would begin answering questions as if they knew all the answers; and invariably, Socrates would lead them around until they tripped over their own feet and realized they didn't know what they were talking about.
On reading what was then the top comment on this article, I saw that it asserted as undeniably true a highly debatable proposition. I was about to reply, when I saw that the second did the same. The third was more of an aphorism that looked profound until you thought about it... There do seem to be a lot of people who want our consciousness to remain a mystery. I guess that it is vitalism's last stand.
FWIW, I think there is a 'hard problem' (more than one, in fact), but not the one Chalmers identifies; perhaps the hardest is 'how come we (think we) have free will?'
That’s easier than you think: what those who use that term in these discussions understand it to mean is mostly based on work done by religious apologists through the centuries. So the answer to “how come” is: to somehow excuse the concept of having an almighty, all-knowing god while people still do whatever. “Free will” so constructed is the limit which that almighty can’t cross. That’s why it’s so emotionally defended.
We have had some success in understanding all sorts of phenomena that are too complex to explain in all their detail: we have a useful understanding of the meteorology of snowstorms, for example, even though we cannot give an accounting for the exact shape of every single snowflake. You have given us no reason to be sure that an understanding of this form cannot be achieved for consciousness.
To anticipate one possible reply: self-referentiality (our mind studying itself) is not self-evidently a barrier, as one mind can study another.
> That's easy to disprove. A calculator isn't sophisticated enough to simulate itself, but a modern computer can perfectly simulate itself.
What you say is pretty obviously false: a computer with finite memory cannot simulate itself in the general case, because that would require either more memory than it actually has (its memory would need to contain the emulation program plus a full memory image) or a compression algorithm that is effective on random data. Computers with infinite memory do not exist, and random data is not compressible; therefore modern computers cannot perfectly simulate themselves. QED.
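The counting-plus-compression argument can be made concrete with a toy check (the cell counts are illustrative assumptions, not measurements of any real emulator):

```python
import os
import zlib

# A perfect self-simulation must store the emulator program *plus* a full
# image of the machine's memory, which cannot fit inside that same memory --
# unless the image could be compressed, and arbitrary (random-looking)
# memory contents cannot be.

MEMORY_CELLS = 4096        # the machine's total memory, in bytes (illustrative)
EMULATOR_CELLS = 64        # hypothetical size of the emulator program

needed = EMULATOR_CELLS + MEMORY_CELLS   # program + full memory image
assert needed > MEMORY_CELLS             # the self-simulation doesn't fit

# And compression is no escape hatch: random data doesn't shrink.
image = os.urandom(MEMORY_CELLS)         # stand-in for arbitrary memory state
compressed = zlib.compress(image, 9)
assert len(compressed) >= len(image)     # no space gained (usually a bit lost)
```

This is why "a modern computer can perfectly simulate itself" only works when the simulated machine is strictly smaller than the host, e.g. an emulator given a fraction of the host's RAM.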
This is one of the many reasons why I love science fiction. Many of my favorite novels center around consciousness and the related technology to assist/enable/enhance it. Reading these in my younger years, many of these novels have shaped my life, especially my career.
I may be confusing my authors, but some of the more recent novels I've read (from about 10 years or so ago, it's been a while), I think from Alastair Reynolds, have a wide range of ideas. Some center around the same kind of thing that Elon Musk talks about with Neuralink. Others take a wildly different path... putting an extant brain in a box or cabinet on wheels.
On this topic I used to be a huge fan of the concept of uploading. But then I formed the opinion that, if all we're doing is copying state, then the new instance is not the old instance. It's just another instance with its own state from that point forward. I think it's also a Reynolds book where a person creates a copy of their consciousness and puts it into a physically very small spacecraft in order to travel at maximum speed to a very distant place (I forget the intended task). But upon return, the two instances had become antagonists due to their different experiences in the meantime.
Likewise, I want to say it was a Rucker book (http://www.rudyrucker.com/wares/) where a person is copied, and the copy is not them, just a new instance.
That kind of soured me on uploading. However, at the same time, it seems to me that it can be akin to giving birth to the next generation. A gift of sorts. Maybe we ourselves cannot directly enjoy the benefits, but possibly we can gift that possibility to our descendants.
I am particularly fond of old cyberpunk takes on this topic. Gibson and his wild cyberspace characters... the Oracle and Papa Legba, the self-aware pimpmobile, and the end of the one book where entities jumped out of all the fax machines around the world... good times.
What if it were possible to probe a single neuron and copy its exact functioning - that is, the actual neuron and the artificial copy both act the same to the same inputs, and produce the same outputs. Not only that, but this artificial neuron, once fully copied and functioning, could then be inserted in parallel with the original. Then - kill the original.
So there's now this artificial neuron (it doesn't have to be inside the actual brain, either!) working exactly like the original. In fact, let's say this artificial neuron does exist outside the natural brain (and let's ignore any propagation delays or whatnot, though in reality, anything we did with electronics would be vastly faster than actual neuronal signal speeds).
So - we have "copied" (or "uploaded" if simulated with software) a neuron from the brain to a new place outside of that brain.
From the brain's perspective - everything is the same.
Do it over and over and over again - until all the neurons are copied from brain to outside of it.
Again - from the brain's perspective, everything is the same - but now it is completely artificial - and may even be running as a simulation in some fashion.
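The gradual-replacement step above can be sketched as a toy simulation (threshold units standing in for neurons; everything here is illustrative): swap each unit for an exact functional copy, one at a time, and verify that the network's input/output behaviour never changes.

```python
import random

# A tiny network of threshold "neurons". Each unit is probed (its parameters
# read out), an exact functional copy is built, and the copy is swapped in,
# conceptually retiring the original. After every single swap, the network's
# behaviour on a fixed set of probe inputs is unchanged -- from the outside,
# nothing has happened.

def make_neuron(weights, bias):
    def neuron(inputs):
        s = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 if s > 0 else 0.0          # simple threshold unit
    return neuron

random.seed(0)
params = [([random.uniform(-1, 1) for _ in range(3)], random.uniform(-1, 1))
          for _ in range(5)]
network = [make_neuron(w, b) for w, b in params]

probes = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(25)]

def behaviour(net):
    return [[n(x) for n in net] for x in probes]

before = behaviour(network)
for i in range(len(network)):
    # "probe" unit i and swap in an exact copy built from the same parameters
    network[i] = make_neuron(*params[i])
    # ...the network as a whole cannot tell the difference:
    assert behaviour(network) == before
```

Of course, this only shows that behaviour is preserved; whether the "being" is preserved is exactly the question the thought experiment leaves open.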
Now - we did this "one neuron at a time" - but how is that fundamentally different than if we could (somehow) make a copy "all at once in parallel" (something similar to the transporter of Star Trek) - then killed the original?
Of course - if that copy and the original existed and were aware at the same time - their experiences would diverge - but what if the copy was instead "wired" to the same inputs and such (that is, in parallel) to the original brain. In short, kinda like the original way we were copying and killing neurons, but this time, instead of killing the neurons (again, wired in parallel), we let them live, then killed them all at once at the end.
Since both sets are receiving the same inputs and producing the same outputs - where is the "being" or the "consciousness" at? Is it only in the natural brain - or in the artificial? Both at the same time? If we killed one, but not the other - where is the being now? Does it matter which we kill?
We could do the opposite - kill off one of the artificial neurons - and the being should still be ok, right? But what if we randomly selected which we killed - artificial one time, natural another - but since they are all wired together in the same manner and were operating in the same manner in parallel - now where is the "being"?
So - does it matter if we kill off the natural neurons in serial vs parallel? Furthermore, assuming everything is wired together in parallel - would copying everything, then killing off the natural side matter? At what point and "how" does the "being" transfer from one side to the other? Furthermore, how fast must the natural side be killed or shut off - and if there is a disconnect between the two sides - does that matter? Like - if the natural side is disconnected from the copy then a nanosecond later is killed - is the being now still in the artificial copy? What does the being experience in all of this?
The funny thing is - something like this already happens - naturally - to our bodies every day and over time. But we retain the concept of "self" and "being". But it happens slowly, and it doesn't happen "all at once" - a copy isn't made and then the original killed off, but rather cells die and are replaced (maybe not perfectly - leading to aging, disease, and possibly death) over the course of time - but by the above thought experiments - does that really matter, especially if it were done quick enough?
Like - imagine a single brain - but connected to two separate but identical bodies. When one blinks, the other blinks as well. Sever the connection with one of the bodies - the being in the brain should "go" with the body still connected, right? So if there are two brains, connected to the same body - and they are both operating in identical fashion - where is the being? Which brain? Both?
Again - this is all a thought experiment - which has been explored in depth by many people for quite a long while. It has been explored by science fiction several times. In both thought arenas, different conclusions have been made over what really happens - or might happen. But really, no one can say to know the answer.
There is also an unending quest to explain foobar. Hard to explain something that's not defined. We can still talk about what meaning we put into this term. My favorite analogy is the flow of electrons in a processor chip is its consciousness and the algorithm it's performing is its cognition. Using this analogy, consciousness is the process that updates our world, i.e. the process that makes a photon move forward, while cognition is the logical interpretation of these updates.
Author claims to "have been reading around in the field of consciousness studies for over two decades" yet mentions neither Giulio Tononi nor Karl Friston => comes off as kind of clueless?
> It's important to divorce consciousness from all ideas of "thought", "will" etc. to consider its essence, which is more connected with "awareness of being", though even that is too complex I think.
> Obviously this is complete conjecture, but it has growing philosophical support - at least as an idea worth discussing - I think.
http://www.eoht.info/page/Panexperientialism
> This also implies to me that consciousness, not being a physical thing itself, comes and goes within the brain, with the sleep cycle or just through lack of attention. The only interaction between one instance of consciousness and the following one would then be via memories.
Assuming you mean cognition, this is essentially the opposite of the idea of panexperientialism, or at least not compatible with it.
> that particular atom exists because it feels like something to be that particular atom
Seems to beg the question somewhat: where is "feeling like something" coming from?
> Graziano’s approach is more interesting, as a theory of attention, but falls short on qualia, and focuses on peculiarities of the human brain, assuming that a cerebral cortex is necessary for generating a conscious experience.
> My pet theory is that consciousness can be modeled as a mathematical dual of the physical world. Think Voronoi diagrams vs. Delaunay triangulation. They are distinct, imbued with their own properties, but inextricably linked in that you can generate one from the other.
It may be similar to Russellian monism, the theory I'm most aligned with. https://plato.stanford.edu/entries/russellian-monism/
This sounds right to me. More examples: syntax versus semantics, theory versus model, algebra versus geometry.
Here's something I discovered recently: "Meaning and Duality From Categorical Logic to Quantum Physics" by Yoshihiro Maruyama http://www.cs.ox.ac.uk/people/bob.coecke/Yoshi.pdf
https://www.nature.com/articles/544296a?WT.mc_id=FBK_NA_1704...
> Consciousness appears to exist outside of the physical world, in that we can describe a physical process entirely without invoking consciousness. Because of this, consciousness is beyond the scientific method and our fundamental understanding in principle, not just in practice.
> This is why it is called the "hard problem" of consciousness. In principle, there is no framework of deduction or reasoning by which we can explain the emergence of qualia.
It is beyond us.
Et voilà, it became conscious.
https://www.ted.com/talks/dan_dennett_the_illusion_of_consci...
"in fact, you're not the authority on your own consciousness that you think you are."
The paper: "Explaining the "magic" of consciousness", Daniel Dennett, 2003:
https://ase.tufts.edu/cogstud/dennett/papers/explainingmagic...
FWIW, I think there is a 'hard problem' (more than one, in fact), but not the one Chalmers identifies; perhaps the hardest is 'how come we (think we) have free will?'
That's easier than you think: what those who use that term in these discussions understand it to mean is mostly based on work by religious apologists over the centuries. So the answer to "how come" is "to somehow reconcile the concept of an all-mighty, all-knowing god with people still doing whatever they want." "Free will," so constructed, is the limit that even the all-mighty cannot cross. That's why it's so emotionally defended.
To anticipate one possible reply: self-referentiality (our mind studying itself) is not self-evidently a barrier, as one mind can study another.
What you say is pretty obviously false: a computer with finite memory cannot simulate itself in the general case, because its memory would need to contain the emulation program plus a full image of its own memory. That would require either more memory than it actually has or a compression algorithm that is effective on arbitrary (random) data. Computers with infinite memory do not exist and random data is not compressible; therefore modern computers cannot perfectly simulate themselves. QED.
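The counting argument above can be sketched with made-up numbers (the sizes below are purely illustrative, not measurements of any real machine):

```python
# Toy illustration of the counting argument: a machine with a fixed
# amount of memory cannot hold a complete snapshot of its own memory
# *plus* the emulator that interprets it. All sizes are made up.

TOTAL_MEMORY = 1024      # bytes the machine has in total
EMULATOR_SIZE = 64       # bytes the emulator program itself occupies

# A faithful self-simulation needs the emulator code plus a full,
# uncompressed image of the machine's own memory.
required = EMULATOR_SIZE + TOTAL_MEMORY

# As long as the emulator occupies any space at all, the requirement
# exceeds what the machine actually has.
print(required > TOTAL_MEMORY)  # True
```

The only escape hatches are the two named in the comment above: unbounded memory, or lossless compression of an arbitrary memory image, and neither exists.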
I may be confusing my authors, but some of the more recent novels I've read (from about 10 years or so ago, it's been a while), I think from Alastair Reynolds, have a wide range of ideas. Some center around the same kind of thing that Elon Musk talks about with Neuralink. Others take a wildly different path... putting an extant brain in a box or cabinet on wheels.
On this topic I used to be a huge fan of the concept of uploading. But then I formed the opinion that, if all we're doing is copying state, then the new instance is not the old instance. It's just another instance with its own state from that point forward. I think it's also a Reynolds book where a person creates a copy of their consciousness and puts it into a very physically small spacecraft in order to travel at maximum speed to a very distant place (I forget the intended task). But upon return the two instances had become antagonists due to their different experiences in the meantime.
Likewise, I want to say it was a Rucker book (http://www.rudyrucker.com/wares/) where a person is copied, and the copy is not them, just a new instance.
That kind of soured me on uploading. However, at the same time, it seems to me that it can be akin to giving birth to the next generation. A gift of sorts. Maybe we ourselves cannot directly enjoy the benefits, but possibly we can gift that possibility to our descendants.
I am particularly fond of old cyberpunk takes on this topic. Gibson and his wild cyberspace characters... the Oracle and Papa Legba, the self-aware pimpmobile, and the end of the one book where entities jumped out of all the fax machines around the world... good times.
Also Asimov and his robot-focused series.
I digress
What if it were possible to probe a single neuron and copy its exact functioning - that is, the actual neuron and the artificial copy respond identically to the same inputs, and produce the same outputs. Not only that, but this artificial neuron, once fully copied and functioning, could then be inserted in parallel with the original. Then - kill the original.
So there's now this artificial neuron (it doesn't have to be inside the actual brain, either!) working exactly like the original. In fact, let's say this artificial neuron does exist outside the natural brain (and let's ignore any propagation delays or whatnot, though in reality, anything we did with electronics would be vastly faster than actual neuronal signal speeds).
So - we have "copied" (or "uploaded" if simulated with software) a neuron from the brain to a new place outside of that brain.
From the brain's perspective - everything is the same.
Do it over and over and over again - until all the neurons are copied from brain to outside of it.
Again - from the brain's perspective, everything is the same - but now it is completely artificial - and may even be running as a simulation in some fashion.
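The serial-replacement step can be sketched as a toy simulation. This is a hypothetical model of functional equivalence only (neurons reduced to deterministic functions); it says nothing about where any "being" resides:

```python
import random

# Toy model: a "brain" is a chain of neurons, each a deterministic
# input -> output function (here, multiplication by a fixed weight).
# Swapping a neuron for a functionally identical copy leaves the
# network's overall behavior unchanged, one replacement at a time.

random.seed(0)
weights = [random.random() for _ in range(5)]   # the "natural" neurons

def neuron(w):
    # Same weight => same input/output behavior.
    return lambda x: w * x

brain = [neuron(w) for w in weights]

def run(brain, stimulus=1.0):
    # Feed the stimulus through the chain of neurons.
    signal = stimulus
    for n in brain:
        signal = n(signal)
    return signal

before = run(brain)

# Replace the neurons one at a time with exact functional copies.
for i, w in enumerate(weights):
    brain[i] = neuron(w)            # "artificial" copy, same function
    assert run(brain) == before     # behavior identical after each swap

print(run(brain) == before)         # True: fully replaced, same outputs
```

After the loop, every original neuron has been discarded, yet every input still produces the same output - which is exactly the "from the brain's perspective, everything is the same" point above.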
Now - we did this "one neuron at a time" - but how is that fundamentally different than if we could (somehow) make a copy "all at once in parallel" (something similar to the transporter of Star Trek) - then killed the original?
Of course - if that copy and the original existed and were aware at the same time - their experiences would diverge - but what if the copy was instead "wired" to the same inputs and outputs (that is, in parallel) with the original brain? In short, kinda like the original way we were copying and killing neurons, but this time, instead of killing the neurons one by one (again, wired in parallel), we let them live, then killed them all at once at the end.
Since both sets are receiving the same inputs and producing the same outputs - where is the "being" or the "consciousness" at? Is it only in the natural brain - or in the artificial? Both at the same time? If we killed one, but not the other - where is the being now? Does it matter which we kill?
We could do the opposite - kill off one of the artificial neurons - and the being should still be ok, right? But what if we randomly selected which we killed - artificial one time, natural another - but since they are all wired together in the same manner and were operating in the same manner in parallel - now where is the "being"?
So - does it matter if we kill off the natural neurons in serial vs parallel? Furthermore, assuming everything is wired together in parallel - would copying everything, then killing off the natural side matter? At what point and "how" does the "being" transfer from one side to the other? Furthermore, how fast must the natural side be killed or shut off - and if there is a disconnect between the two sides - does that matter? Like - if the natural side is disconnected from the copy then a nanosecond later is killed - is the being now still in the artificial copy? What does the being experience in all of this?
The funny thing is - something like this already happens - naturally - to our bodies every day and over time. But we retain the concept of "self" and "being". But it happens slowly, and it doesn't happen "all at once" - a copy isn't made and then the original killed off, but rather cells die and are replaced (maybe not perfectly - leading to aging, disease, and possibly death) over the course of time - but by the above thought experiments - does that really matter, especially if it were done quick enough?
Like - imagine a single brain - but connected to two separate but identical bodies. When one blinks, the other blinks as well. Sever the connection with one of the bodies - the being in the brain should "go" with the body still connected, right? So if there are two brains, connected to the same body - and they are both operating in identical fashion - where is the being? Which brain? Both?
Again - this is all a thought experiment - one that has been explored in depth by many people for quite a long while, and by science fiction several times. In both arenas, different conclusions have been reached about what really happens - or might happen. But really, no one can claim to know the answer.