There are a couple of traps to be aware of with this article.
1. "Bioelectricity"
This is a generic term which doesn't capture the nuance of charge gradients and chemical gradients in cells. While you can directly apply charges to interact with gradient-based biological systems, this is a brute-force method. Cells have chemically selective membranes. So while applying an external electrical voltage can act in a similar manner as causing a neuron to fire, it is far less precise than the calcium and sodium channel mediated depolarization which implements normal firing. Said another way 'bioelectricity' is not simple. (A short sketch after this comment makes the selectivity point concrete.)
2. Replacement
This one is a bit more subtle. If you find that you can affect a system by one means, that is not the same thing as saying that means is the cause. Take the example of using RNA to transfer memory from one Aplysia to another. Immediately after transfer the recipient does not have the memory. It takes time for the introduced RNA to affect sensory cells so that they become more sensitive to stimulation. This is in contrast to a trained animal that has already undergone synaptic remodeling. If you have the appropriate synapses but were somehow able to remove all the relevant RNA in an instant, the animal would continue to 'remember' its training. Synapses are sufficient.
In reality there are multiple systems that work together over multiple timescales to produce the behaviors we observe. Some of those systems can have their contributions mimicked by other interventions. Because of this complexity you can never say 'it's really about X'; the best you can say is 'X plays a major role' or 'X contributes Y percent to this observed phenomenon'.
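To make the selectivity point concrete, here is a minimal sketch in C of the textbook Nernst equation (the ion concentrations are rough mammalian textbook values, assumed for illustration). Each ion species has its own equilibrium potential, so which channels a cell opens determines where its membrane voltage moves, while an externally applied field pushes on everything indiscriminately.

```c
/* Nernst equation: E_ion = (RT/zF) * ln([ion]_out / [ion]_in).
   Concentrations below are rough textbook values in mM. */
#include <stdio.h>
#include <math.h>

static double nernst_mV(double z, double c_out, double c_in) {
    const double RT_F = 26.7; /* RT/F in millivolts at ~37 C */
    return (RT_F / z) * log(c_out / c_in);
}

int main(void) {
    printf("E_K  = %7.1f mV\n", nernst_mV(1.0, 5.0, 140.0));  /* ~ -89 mV */
    printf("E_Na = %7.1f mV\n", nernst_mV(1.0, 145.0, 12.0)); /* ~ +67 mV */
    printf("E_Ca = %7.1f mV\n", nernst_mV(2.0, 2.0, 1e-4));   /* ~ +132 mV */
    return 0;
}
```

Which of these equilibria the membrane moves toward depends entirely on which channels are open; a blanket external voltage has no such selectivity.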
> Said another way 'bioelectricity' is not simple.
> If you have the appropriate synapses but were somehow able to remove all the relevant RNA in an instant, the animal would continue to 'remember' its training. Synapses are sufficient.
I'm not sure these two statements are compatible. The first is definitely true, and RNA does function on a slower timescale. We can't be 100% confident that some of the complexity we don't understand in the first statement wouldn't have an impact in the second scenario, can we?
I am not sure I would call RNA transferring regulatory programs "memory". This looks more like epigenetic transfer than what we would call memory (i.e., factual recall). My training was before the more recent work with Aplysia, but "RNA memory transfer in planaria" was presented as an example of "how to make big claims with irreproducible experiments" in grad school.
I appreciate that epigenetics is a well-established field at this point but I worry people conflate its effects with other phenomena.
I tend to agree; the word "memory" makes me think of a higher-level (more abstract) type of action than a simple reactive switch. I'm not sure where the line is, or whether there really needs to be one.
Having said that, are you familiar with the Purkinje cell from a rabbit that they trained to respond to timed patterns of input in isolation?
Timed pattern=input 1, delay X, input 2, delay Y, then input 3.
Definitely more than a simple on/off switch type training, but does that rise to the level of "memory"?
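For readers unfamiliar with the setup, here is a hypothetical sketch of what recognizing such a timed pattern involves; the delays and tolerance are made-up numbers for illustration, not values from the rabbit experiment:

```c
/* A recognizer that only completes when inputs arrive at learned delays. */
#include <stdio.h>
#include <math.h>

typedef struct {
    double delay_ms[2]; /* learned delays X and Y */
    double tol_ms;      /* accepted timing jitter */
    int    stage;       /* inputs matched so far */
    double last_t;      /* time of previous input */
} Recognizer;

/* Feed one input at time t (ms); returns 1 when the full pattern fired. */
static int feed(Recognizer *r, double t) {
    if (r->stage > 0) {
        double gap = t - r->last_t;
        if (fabs(gap - r->delay_ms[r->stage - 1]) > r->tol_ms)
            r->stage = 0; /* wrong interval: fall back to the start */
    }
    r->last_t = t;
    if (++r->stage == 3) { r->stage = 0; return 1; }
    return 0;
}

int main(void) {
    Recognizer r = { {150.0, 200.0}, 20.0, 0, 0.0 };
    double times[] = { 0.0, 150.0, 350.0 }; /* input 1, delay X, input 2, delay Y, input 3 */
    for (int i = 0; i < 3; i++)
        if (feed(&r, times[i]))
            printf("pattern completed at t = %.0f ms\n", times[i]);
    return 0;
}
```

The point of the sketch is that timing-sensitive recognition needs retained state between inputs, which is already more than an on/off switch, whatever we decide to call it.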
> This is in contrast to a trained animal that has already undergone synaptic remodeling. If you have the appropriate synapses but were somehow able to remove all the relevant RNA in an instant, the animal would continue to 'remember' its training. Synapses are sufficient.
Not if you removed the DNA. Epigenetic changes to the DNA are what maintain the synapse at its "learned" state. Here's a link:
https://www.sciencedirect.com/science/article/pii/S240584402...
Additional note: I forgot that synapses are also maintained by local RNA (local = at/near the synapse), so removing the RNA would definitely cause the synapse to revert to a different state and not retain its "learned" state.
In addition, research has shown neurons communicating via mRNA (surrounded by a lipid).
https://www.nature.com/articles/d41586-018-00492-w
https://www.inverse.com/article/40113-arc-protein-ancient-mo...
Lots of interesting stuff in this arena.
I also want to know how much of this was replicated by independent, skeptical sources looking for alternative explanations. One thing I see in "science" reporting is that one or a few people make wild claims, it hits the news, and people believe their word on faith with no replication. There are also many statements here about what we know, and those claims should have citations too. Yet people who have never run experiments like that are nodding along saying, "Of course it's true."
Or was all this replicated? What strengths and weaknesses did they hypothesize in these studies? What did they prove or disprove? What are the next steps? And can we already implement any of those in simulators?
(Note: I think agents poking and prodding the world can definitely be implemented in simulators. Even primitive game engines should be able to model some of that.)
> In reality there are multiple systems that work together over multiple timescales to produce the behaviors we observe. Some of those systems can have their contributions mimicked by other interventions. Because of this complexity you can never say 'it's really about X'; the best you can say is 'X plays a major role' or 'X contributes Y percent to this observed phenomenon'.
You can say the same thing about computer systems - as long as you don't understand the underlying logic. If you don't understand that the chemistry of transistors doesn't matter as much as the C code, you can make exactly the same critique about how a ThinkPad works: "So while applying an external electrical voltage can act in a similar manner as causing a neuron to fire, it is far less precise than the calcium and sodium channel mediated depolarization which implements normal firing. Said another way 'bioelectricity' is not simple....In reality there are multiple systems that work together over multiple timescales to produce the behaviors we observe. Some of those systems can have their contributions mimicked by other interventions."
Once you do understand the logic - the 'why' of von Neumann machines and JavaScript and transistors - it's clear that your claim isn't true and there is an underlying logic. The trouble is, until we positively identify that logic, we can't know if it exists or not, and we're stuck debating the bioequivalent of the fundamental computational significance of the clock cycle speed of a CPU.
I have a very rudimentary understanding of how electricity, electronic circuitry, and transistors work, but it does make me wonder:
We use programming languages like C to create complex branching algorithms that are turned into a linear machine-code tape. Programmers generally cannot understand assembly even if they understand the branching code that is turned into assembly. Even if assembly had variables, just the fact that if/elses and function calls are turned into jumps is enough to make the code too complicated to understand. It might be possible to disassemble back to C by resolving the jumps into something that is easier to understand.
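As a minimal illustration of that flattening, here is the same trivial function written twice: once structured, and once by hand the way a compiler lowers it, with the if/else ladder turned into conditional jumps over straight-line code.

```c
#include <stdio.h>

/* Structured version: the control flow is visible in the nesting. */
static int clamp_structured(int x) {
    if (x < 0)        return 0;
    else if (x > 255) return 255;
    else              return x;
}

/* Hand-flattened version: the same logic as jump-threaded code. */
static int clamp_flattened(int x) {
    if (!(x < 0)) goto not_negative; /* branch-if-false over the block */
    return 0;
not_negative:
    if (!(x > 255)) goto in_range;
    return 255;
in_range:
    return x;
}

int main(void) {
    int tests[] = { -7, 42, 300 };
    for (int i = 0; i < 3; i++)
        printf("%4d -> %3d / %3d\n", tests[i],
               clamp_structured(tests[i]), clamp_flattened(tests[i]));
    return 0;
}
```

Multiply that by thousands of branches and the original structure is effectively invisible, which is the point above.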
Imagine if brains worked the same way. That there is actually a naturally-forming high level "brain language" that is turned by a "brain compiler" function into a low-level "brain assembly," but when we look at it all we see is the assembly. That what the brain is actually doing is relatively simple, but because we can only observe the output of the compiler function it appears to be insanely complex to reverse-engineer.
Then again, I don't have the faintest idea of how brains work either.
I guess technically true, but the cell channels are vastly more complex and much harder to measure. Chemical gradients can pass electric currents, but they can also trigger other chemical cascades and cause physical changes in the cell that may not be reflected when a charge is applied. Logic is also fairly consistent across computer systems, whereas biological systems can function differently from person to person, and even within the same person at different points in time. There are so many more variables with the living system.
> there is an underlying logic. The trouble is, until we positively identify that logic, we can't know if it exists or not
First you exclaim there is an underlying logic, then in the next sentence you say we don’t know whether it exists, which completely contradicts your claim.
Interesting to see Levin's zeitgeist spreading (although the number of podcasts and discussions he has done explains that too).
I don't know what the biological/medical field thought about single-cell and tissue-level intelligence before, but I found this gap in the usual medical thinking (usually things are either genetic or biochemical/hormonal) quite mind-blowing.
Hopefully this results in new opportunities for finer medical therapies.
This is just incredible! I have been following Michael Levin for quite a while now and I am sure that he will earn a Nobel Prize for this outstanding research! All the other things that he addresses in his presentations and interviews are just mind-blowing! (The one with Lex Fridman is quite in-depth, but I prefer others even more.)
This really has the potential to revolutionize our understanding of intelligence, mind and medicine.
He may just tell cells to grow a new heart without modifying genes. He wants to have what he calls an 'anatomical compiler' which translates our "designs" into electromagnetic cell stimuli so that the cells will build them.
For me this is really pointing towards a worldview that is much more in line with what the ancient mystics from all cultures throughout the ages have been pointing towards:
Intelligence is something fundamental to existence, like space and time (maybe even more fundamental). It is all a play of intelligence, it is phenomenal and it can be tapped into. This is amazing!!!
I've been listening a lot to Sean Carroll's Mindscape podcast [0]. In it they have this notion of complex-to-intelligent systems. Their loose definition is that such systems can hold an internal state that represents the world around them. A sort of model to interact with and to extrapolate future events from (time travel!). In this light consciousness also makes more sense to me, although consciousness feels more like a by-product; our (human) ability to hold an internal model of the world in our minds and interact with it is pretty advanced. One can imagine that, somewhere in the feedback loops (I think, that she thinks, that I think, that she thinks, ...), something like consciousness (awareness [a model?] of the self in the world?) evolved.
Anyway, cells can hold (super) primitive models of the world and maintain internal balance in the face of anticipated events.
I'm just a cocktail philosopher, but aren't we all.
[0] https://podverse.fm/podcast/e42yV38oN
But still - why is consciousness required? A model of the world could be held even without it, in my view.
E.g., I wouldn't think GPT-4 is conscious, but I'm pretty sure there's a representation of an abstract world and the relationships within it encoded in its neurons and weights. Otherwise it wouldn't be able to do much of what it does.
Also, I think a model of the world is just that: something that can be represented as relationships between neurons, symbolising that model of the world.
And I think you can have a complex and perfect set of neurons and connections that represents everything in the most efficient manner for that number of parameters (neurons and connections together). There probably is a perfect configuration, but it couldn't be achieved even using training or evolutionary methods.
And none of it requires consciousness in my view.
Did someone say it is? Parent explicitly called it out as a by-product.
I think most of our world model is actually a human model. Our social relationships are more important than we give credit for.
So there's an arms race. The more brains you have the more you can model your tribe to know how to help or succeed. AND the bigger everyone's brain is the harder they are to model simply.
In this model consciousness is the "self model" or "self-consciousness" that allows you to model others' opinion of yourself by having such an opinion yourself. And adjust their opinion by providing a narrative about yourself, which you first have to craft... and on and on with higher levels of abstraction.
I think some problems are simple enough that they can be dealt with "blindly", but some problems turned out to be tricky in special ways that evolution was more able to solve via consciousness than blind information processing. And from there, we find ourselves, with that new capability in hand, able to repurpose consciousness to newer and newer things. Then retrospectively it can look like consciousness wasn't "needed" for certain problems.
So I think even if you want to make the case that consciousness solves a lot of problems it doesn't need to, it may have been a "real" solution to a "real" problem at some point in our history. And from that point on, it was no longer important whether it was the best solution.
I do think it's fair to say that lots of remarkably complex informational problems are solved in a p-zombie way, which is to say, with every outward appearance of intelligence (slime molds solving mazes, collective behaviors of ants). So I do think evolution or nature writ large "agrees" with you that consciousness isn't strictly necessary.
1) A huge stream of sensory data, only some of which gets promoted to conscious awareness.
2) Some of that raw data and other conscious outputs are persisted into working, short, and long term memory.
3) And your consciousness works recursively using (2) as well as (1) as inputs.
All the stuff in GPT that gets called "memory" in machine learning seems much more like (1), and it lacks any ability to persist data outside its context window, so we're still missing something.
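Purely as a toy illustration of that three-part loop (every threshold and size below is arbitrary; this is a sketch of the shape of the idea, not a model of anything real):

```c
#include <stdio.h>

#define MEM 8

int main(void) {
    int stream[] = { 3, 1, 9, 2, 8, 1, 7, 2, 9, 1 }; /* (1) raw sensory stream */
    int memory[MEM] = {0};                           /* (2) persistent store    */
    int m = 0;                                       /* next write slot         */
    for (int t = 0; t < 10; t++) {
        /* (3) recursion: what was last remembered biases what stands out now */
        int salience = stream[t] + memory[(m + MEM - 1) % MEM];
        if (salience > 8) {               /* only some data reaches "awareness" */
            memory[m % MEM] = stream[t];  /* ...and only that gets persisted    */
            m++;
            printf("t=%d: promoted %d to awareness and memory\n", t, stream[t]);
        }
    }
    return 0;
}
```

The missing piece the comment points at is exactly the `memory` array: a GPT-style context window forgets everything outside itself, so step (2) never happens.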
Having a purely representative model of the world is less useful than having a sandbox for modeling choices and outcomes. Do I need to duck before entering that doorway?
That introspective analysis is consciousness. Humans have just improved the same mechanism allowing for more abstract analysis.
Personally, I doubt that self-awareness can be achieved without some form of consciousness, and I feel that self-awareness is a key component of higher intelligence.
If intelligence and/or consciousness arise as emergent properties in the right sort of complex system, they will disappear from view in a low-level analysis of the causal processes occurring in that system.
Is there any way you could have a being like a human, who when asked would say they're not conscious? Is a definition of consciousness allowing that possible?
I'm not talking about whether they are or aren't, but surely all intelligent beings would say and think they're conscious?
A thermostat is a system that can hold an internal state (nominally, temperature) that represents the world around it. You can also build a thermostat with a switch and a bimetallic strip with differing rates of thermal expansion -- a device that is clearly not intelligent. I'm not sure I can subscribe to this definition...
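For what it's worth, the entire "cognition" of such a thermostat fits in a few lines. A minimal sketch in C, with an arbitrary setpoint and a hysteresis band standing in for the mechanical lag of the strip:

```c
#include <stdio.h>

/* One reading plus one bit of retained state: the whole "world model". */
typedef struct { double setpoint, band; int heating; } Thermostat;

static int thermostat_step(Thermostat *t, double temp) {
    if (temp < t->setpoint - t->band) t->heating = 1; /* too cold: switch on  */
    if (temp > t->setpoint + t->band) t->heating = 0; /* too warm: switch off */
    return t->heating; /* between the thresholds, the previous state persists */
}

int main(void) {
    Thermostat t = { 20.0, 0.5, 0 };
    double readings[] = { 21.0, 19.2, 19.8, 20.4, 20.8, 20.2 };
    for (int i = 0; i < 6; i++)
        printf("%.1f C -> heater %s\n", readings[i],
               thermostat_step(&t, readings[i]) ? "on" : "off");
    return 0;
}
```

The `heating` flag is genuine internal state (the output depends on history, not just the current reading), which is exactly why "holds state that represents the world" feels too weak as a definition of intelligence.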
My thermostat may not be intelligent but it is certainly smart... At least it says so on the box.
Anyway, the strip does contain the state of the world around it: the temperature is modeled by how much the bimetal is bent. I think indeed it is a minimal example of a complex system, one that at first glance defies explanation, it seems to have purpose (keep temp stable), until you understand the inside.
Anyway, "Is a virus alive?", "Are these specimens the same species?", ... Us humans like our boxes, but at the edges they almost always go wrong.
In lectures I use thermostats as an example of an intelligent system that matches most attempts at defining intelligence. And I have no qualms saying they are intelligent. Intelligence is a very vague and very context-dependent thing that can at most be used to compare some things in some scenarios.
It's not just the internal state but the prediction that makes it intelligent.
The definition of intelligent I give is "to mitigate uncertainty." If it does not mitigate uncertainty, it is not intelligent.
It is merely of constrained intelligence. Perhaps your expectations are too broad.
If the thermostat reacts appropriately to environmental changes then it is performing its role intelligently.
Your brain is taking in a lot of information at the edges of your awareness; light, sounds, touch, etc. are all getting absorbed and transmitted to your brain. As that information is transmitted along your neurons it's getting summarized, then merged with other summarized information and summarized again. The brain is getting summaries of summaries, and developing a unified categorization of the global state across all its inputs.
Then the brain takes that summary and makes a prediction about the future state. The summarization is energy-efficient. By categorizing all that data into a global state you make decision making possible. "When my boss seems stressed all week, then calls a bunch of people one-by-one into his office on Friday afternoon, I know lay-offs are coming. I better polish up my resume." From "stress/anxiety/unease" in the environment to "danger is coming I need to fight/flight".
Your brain is taking that summary/categorization and figuring out what it needs to do next. If "X" happens then I should do "Y" to "stay-safe/get-food/find-a-mate". The brain is very good at capturing and summarizing data, and making a prediction because that process is much more efficient than doing otherwise. Instead of foraging everywhere for food and hoping I just bump into something that will provide sustenance, I know if X, Y, and Z happen then food will be "here", and I can get lots of it.
You can apply this same model to all actions the brain directs. It also helps make sense of why maladaptive behaviors develop. Sometimes the summary is incorrect, or was formed based on past information that no longer applies, and it may need to be unlearned.
You can say that. You can say a lot of things to explain consciousness in a materialistic sense, as in how it could've emerged. But I cannot fathom how material interacting with other material and forces gives rise to subjective experience. It simply makes no sense to me. If I created a copy of my brain, it would be conscious, but with its own unique subjective experience. This makes sense so far, but what exactly is this subjective experience, and how can "mere" mechanical matter create such an entity?
So in short: I cannot understand what is the actual substance of subjective experience.
Have you ever been under anesthesia like propofol?
I feel like most of what we call "consciousness" is converting short-term memory into selected long-term memory, facilitated by language. Because when you're under, you can even be "interactive", but you're not conscious of it because your short-term memory has been disabled.
As to "human intelligence", honestly, I think that human languages that let us convert our "consciousness" into a shared hallucination is the key evolutionary advantage. Human intelligence comprises a hive mind in a sense, that our experience of the world is hugely affected by the shared social experience where language transfers memory from person to person.
> So in short: I cannot understand what is the actual substance of subjective experience.
This problem just goes away if you assume that there is no dividing line between the "experience" of you and the "experience" of any other computational system. Actually try to think, what does a computer "experience"? An atom? What does it feel like to be a standing desk?
- you can represent complex state in a distributed way, so each neuron only encodes a small part of a larger signal
- the system has a working model of the environment, including our value judgements for all states, which are basically our emotions
Such a system can have experience because it has a latent space to encode experience in. It feels like something to be an agent because of the external environment and internal models of the environment, which include imagination and emotions. And this feeling is essential in choosing our actions, so there is a feedback loop action-to-emotion, then emotion-to-action. Our feelings are causal.
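A minimal sketch of that distributed-encoding point, assuming a standard population-coding toy model (Gaussian tuning curves plus noise; this is illustrative, not from the article). No single unit carries the signal, yet pooling the population recovers it:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 64

int main(void) {
    double x = 0.3;            /* the scalar "signal" to represent */
    double num = 0.0, den = 0.0;
    srand(42);
    for (int i = 0; i < N; i++) {
        double pref   = (double)i / (N - 1);               /* unit's preferred value */
        double tuning = exp(-pow((x - pref) / 0.1, 2.0));  /* Gaussian tuning curve  */
        double act    = tuning + 0.05 * rand() / RAND_MAX; /* small piece + noise    */
        num += act * pref; /* population-vector readout: */
        den += act;        /* activity-weighted average of preferences */
    }
    printf("encoded %.2f, decoded roughly %.3f from %d noisy units\n", x, num / den, N);
    return 0;
}
```

Each unit's activity is nearly meaningless on its own; the representation lives in the pattern across the population, which is roughly the first bullet above.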
What makes sense to me is that consciousness is not an emergent property but a core of all things, with the additional property that it is replicative/additive. That is, smaller consciousnesses can form larger consciousnesses.
As to what it is, or why it exists at all, I don't think there will ever be answer to that. It just is.
It's definitely a strange thought, but it seems more likely to me than neurons or whatever other brain matter somehow producing consciousness out of thin air as soon as they reach some level of composition.
Pure materialism also seems very ill-defined to me. The physical world is, after all, only observable/detectable/able to be studied through conscious experience. At best we can say that what is real is what is universally agreed upon by all observing conscious agents. If hypothetically there were only two of us, and I said "There is no ball in front of us" and you said "There is", then what is the meaning of physical truth/reality?
You can say let's use a detector. But then again, if I experienced the detector as saying false and you said it's true, what do we do?
It seems unavoidable that reality is a part of conscious experience, and not the other way around.
If we wrote software to do this, but we were so incompetent that we couldn't fill in the model with correct data, we might just say "who gives a fuck, fill that with random garbage and we'll fix it in a later version". And then we never do.
Your subjective experience is that incompetent model. Your model doesn't know how to correctly judge human character, so you misread people and always wonder why they say one thing but another is true, and it doesn't click that they're lying the whole time. You can't keep track of time because the internal clock just isn't implemented, so the whole day seems to drag on, or maybe fly by too quickly.
It's all just really shitty software. Layers upon layers. And because humans believe this to be some mystical thing, rather than trying to fix it from the inside, they assume that it's awesome, necessary, and why would anyone want to fix it?
No small fraction of it is simply because our memory is faulty. The only time you ever remember anything is the first time you remember it, every memory access of that is really you remembering the last time you remembered it. Each access is lossier than the last, and confabulation is guaranteed. This seems to be true even moments after the event.
If it was anyone other than evolution who wrote your code, you could sue them for criminal negligence.
And that's before we even get to the part where you find out you're not even you. Inside your skull is another being, an intelligent one, with desires and goals. But you can't see, hear, or feel this being. It's invisible. The "you" that I'm talking to, exists because this being once upon a time needed to simulate the other humans around him, so he could anticipate them well enough to not be out-competed. He has a pretty good idea what they'd say if he asked them questions, how they'd respond to threats and challenges, what sets them off (so he can avoid fights). And, by mistake or design, he used this simulator to simulate himself (maybe to bootstrap it? if the simulation's output matches his known answers, it's working correctly?).
You're the simulation. When the judge asks the psycho teenager why he put the cat in the microwave and he says "I dunno" he's telling the truth. He does not know why. When your girlfriend cheats on you, and she's crying hysterically and can't tell you why she did it, she's not just lying (either to hurt you or to spare feelings)... she doesn't know. It was that other being in their skulls doing these things. They're just the simulations.
Now, you've either been poking around in your own head, seeing little glimpses of what I'm talking about, making you wonder if I'm not on to something, or you're incapable of that. I've met both kinds of people. If you're the former, you're wondering just how much of it I understand, because some of the glimpses paint a slightly different picture from what I describe. That's because our minds weren't built the same way. No two are alike, not in a special snowflake way, but instead like no two shacks in shantytown have the same kind of leaky cardboard roofs. And, if you're the latter...
I mean, you could write a program with a "mind" object that receives a bunch of data through various sensory "experience". From the perspective of the "mind", the data is "subjective", and the mind is "implemented" in exactly a way that it can represent itself as an entity "I".
I don't think the biological reality is conceptually any more complicated, except that the mind and data are complex in exactly a way that completely hides the abstraction, roughly by being very good at ignoring meaningless artifacts of abstraction.
The hard part isn't imagining such a subjectivity, but imagining that I am that.
We have a deep-seated belief that the atom is the core of reality.
And everything emerges from there.
This materialism stems from René Descartes and his fellow philosophers.
And in the West it's often subconsciously combined with evolutionary theory: consciousness developed because it was useful somehow. However, that's a very big leap to make.
Both theories have good arguments going for them, but they are very theoretical and need a lot more proof. Yet they form the basis for pretty much all Western thought.
From a scientific perspective we have no idea how to create new consciousness or what it is.
From a human's experience it's more the other way around, reality is an emerging property of consciousness.
At the same time we have also learned that matter & time are not as solid as we thought a few centuries ago.
In the brain there is an emergent reflection of material reality: the brain creates a fully constructed model of the world with its own independent existence; our day-to-day experience is a dream that coheres to sense input. Whether or not that is where consciousness or our apparent point of view lives, I don't know, because I don't see how to logically prove it either way. But experimentally it seems like it does, because our experiences align, and because you can alter people's state of consciousness through chemical and physical means.
> Anyway, cells can hold (super) primitive models of the world and maintain internal balance in the face of anticipated events.
I'm not even a cocktail biologist, but my understanding is cells effectively operate via a web of complex chemical reactions, so the notion of a cell holding primitive models might be analogous to the way a CPU executes an assembly instruction: not because it "thinks" but because the way it's wired it's (nearly - barring solar radiation, I suppose, which incidentally also goes for cells) inevitable that it will react to a stimulus in a predefined way (even though the way cells react to stimuli is far more advanced than a CPU).
In a similar way, "anticipating events" could involve an analogue to computer memory: the processes that have run so far have led to certain state being saved to memory that will now influence how the system reacts to stimuli in a way that's different from how it reacted before (e.g. sum a value with the value stored in a register).
> not because it “thinks” but because the way it’s wired it’s inevitable that it will react to a stimulus in a predefined way
This CPU analogy of yours doesn’t comport very well with the article we’re commenting on, which detailed some specific experiments that show cells are not reacting in a predefined way that is due to their ‘wiring’, contrary to previous and maybe incomplete understanding of how cells work. I don’t know if the RAM analogy helps since the surprise is that non-brain cells do have memory and do cooperate with other non-brain cells to solve certain problems, and these collections of non-brain cells can apparently remember solutions to problems over time. So yes, memory can help with anticipating events, but that really supports the idea that cells are dynamic and doing some non-trivial processing vs the possibly outdated notion that they’re hard-wired and deterministic.
> not because it "thinks" but because the way it's wired
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
― Edsger W. Dijkstra
If we are talking about which kinds of complex systems (our brain, a cell, a computer, an LLM, a beehive, etc.) think, and how, we should note that there is nothing magical[0] in our brain that makes our thinking special, and so other blobs of atoms that are not our brain can likely do things analogous to what our brain does.
This is to say that explaining in reductionist terms how something supposedly thinks is not proof that it is not really thinking. Otherwise a sufficiently intelligent alien could prove that you are not really thinking (just a bunch of ions dancing).
[0] And if there is something magical, then we can understand how it works and where else it is magicing stuff.
CPUs are anticipating all the time how the future will evolve. They have caches (to be specific, expiration strategies), branch predictors, and speculative execution. Albeit for a very different purpose: to enhance processing speed, not to react to external events.
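The classic two-bit saturating-counter branch predictor (a textbook scheme, not any particular CPU's implementation) shows how little state it takes to turn history into anticipation:

```c
#include <stdio.h>

int main(void) {
    int state = 0; /* 0,1 = predict not-taken; 2,3 = predict taken */
    int outcomes[] = { 1, 1, 1, 0, 1, 1, 1, 0 }; /* actual branch results */
    int correct = 0;
    for (int i = 0; i < 8; i++) {
        int prediction = (state >= 2);
        correct += (prediction == outcomes[i]);
        if (outcomes[i]  && state < 3) state++; /* taken: lean toward taken */
        if (!outcomes[i] && state > 0) state--; /* not taken: lean away     */
    }
    printf("predicted %d of 8 outcomes correctly\n", correct);
    return 0;
}
```

The saturation is the point: one surprising outcome nudges the counter without flipping the prediction, so a loop branch that is occasionally not taken stays predicted taken.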
> not because it "thinks" but because the way it's wired it's (nearly - barring solar radiation, I suppose, which incidentally also goes for cells) inevitable that it will react to a stimulus in a predefined way (even though the way cells react to stimuli is far more advanced than a CPU)
I think these are likely different only by way of their level of complexity. We simply substitute a word like "think" when the reactions to stimuli are far too complex and numerous for us to track fully. But ultimately said "thinking" is made up of many, many cells following those same stimulus/reaction patterns.
> Anyway, cells can hold (super) primitive models of the world and maintain internal balance in the face of anticipated events.
I've occasionally run into science podcasts, going back almost a decade, where some researcher talks about the computational power of cell membranes. Amoebas and paramecia navigate their environments, sense, and react through their cell membranes; apparently, synapses evolved from these mechanisms.
The upshot of this for AI is that the neural network model may be drastically incomplete, with far more computation actually happening inside individual neurons.
Nobody is attempting a one-to-one correspondence between neurons and artificial "neurons"; the fact that a single biological neuron does much more doesn't imply some limitation or incompleteness (as long as the same computations can be implemented simply by having more of them, and as far as we understand, that seems to be the case). The choice is primarily due to how our hardware parallelization works: we'd prefer to implement the exact same behavior with 1000 structurally identical simple "neurons" rather than have a single more complex "emulated neuron" that requires more complicated logic that can't be straightforwardly reduced to massive matrix multiplication.
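A tiny concrete instance of trading one complex unit for several simple ones: XOR is famously beyond a single linear-threshold unit, but two standard ReLU units plus a linear readout compute it exactly. The weights below are the well-known hand construction, not learned values:

```c
#include <stdio.h>

static double relu(double v) { return v > 0.0 ? v : 0.0; }

/* xor(x,y) = relu(x + y) - 2 * relu(x + y - 1) for x, y in {0, 1} */
static double xor_net(double x, double y) {
    double h1 = relu(x + y);       /* simple unit 1 */
    double h2 = relu(x + y - 1.0); /* simple unit 2 */
    return h1 - 2.0 * h2;          /* linear readout */
}

int main(void) {
    for (int x = 0; x <= 1; x++)
        for (int y = 0; y <= 1; y++)
            printf("xor(%d,%d) = %.0f\n", x, y, xor_net(x, y));
    return 0;
}
```

The whole computation stays inside plain matrix multiplications and elementwise nonlinearities, which is exactly the form our accelerators are built for.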
I'm also a cocktail philosopher, but isn't consciousness different to just having a model of the world and self within it? Consciousness is the lived experience. The world model and feeling of self appear in consciousness. I think a complex system could plausibly be conscious without having a belief of a self within it. Not sure if consciousness is possible without any world model though.
My impressions about this were strongly influenced by Sam Harris's Waking Up book and app.
One possibility at least is that "the experience" is not something that really happens. That is, it's possible that we don't actually "feel" anything, and our impression that we do is just the story that our self-model comes up with to explain (and help predict) our reactions to ourselves. Just like our world model has ideas like "the rock wants to fall down", it's possible that our self-model does too.
We already know that our self-model can be entirely wrong about our feelings. People with paralyzed or even missing limbs often believe at first that they just don't want to move that limb. So, they think they are having one experience, but they are wrong about their own internal experience: in fact, they are not moving that limb because they can't. And there are many other similar examples of people being wrong about their own internal experiences, typically but not exclusively because of some illness.
So, it's possible that our internal experiences are in fact only a model in which one part of our brain interprets the actions of other parts of our brain, often retroactively.
Note: I'm not claiming this is the truth or silly things like "if you believe in science you have to believe this". It's just another cocktail philosopher's story of what consciousness might be. Other stories are just as plausible, and just as consistent with the little we do know in this area.
I agree that "consciousness is different to just having a model of the world and self within it" indeed. I'm just saying it feels like that modelling ability (which has clear and major evolutionary advantages) is a step towards consciousness, indeed something in the now (as we experience it). A (near) real-time model perhaps that constantly projects and adjusts. I guess this still doesn't require consciousness, but maybe consciousness results from this? Does it require a sense of "now" and identity relative to the world model?
This is one of Hofstadter's big ideas that he explored in his main works: GEB, The Mind's I, and I Am a Strange Loop. The latter is a good intro to his work.
The particular podcast didn’t come across with that link. Can you provide the title or number? I’d like to listen to it! I reviewed a fair amount the podcast list, but didn’t find a match to your description.
Joscha Bach also talks about this a lot. He calls consciousness the monkey with a stick controlling the elephant. For a starting point, listen to his Lex Fridman interviews.
> A sort of model to interact with and to extrapolate future events from
Something something LLMs can only predict the next word.
I hate to spin up this trendy debate again, but it's always funny to me to see the dissonance when talking about the exact same things in biological and mathematical cases.
LLMs don't even come close to the complexity of the human mind though. They're a pastiche of human language, a fuzzy jpeg of the Internet.
The human mind is _so much more_ than a prediction machine, and incredibly complex... All that's before you get into the way the endocrine system interacts with your mind.
A single neuron has an average of 250,000 connections in some parts of the brain. The speed at which neuronal signals travel varies from neuron to neuron, from 2.5 m/s to 200 m/s.
Human minds are more than just prediction. The anterior lateral prefrontal cortex has the sole responsibility of prediction (not that nothing else does, just that the ALPC seems solely dedicated to that task) and is extremely good at it. Prediction can influence all sorts of mental processes such as most all forms of perception... But it is _not_ the same as _being_ all forms of perception. If something unpredictable enough happens in front of you: you'll still see it.
Sure, there are limits to that: when focused on a task, the predictive parts of sight tend to filter out visual data that doesn't match the signal you're looking for (see: basketball players passing the ball and a moonwalking man in an ape suit), but if every basketball player turned into Spaghetti-Os and started screaming you'd still hear the sounds and see the O's.
So sure: LLMs do a good job at basic prediction but they're nowhere near the complexity of the human mind, of which prediction is only a small piece.
(And let's not even get into efficiency... A brain runs on 20W of power)
If human brains have a model, then is language the transport layer on top of that? Is trying to get to intelligence via language no better than trying to get to "google" by modeling its TCP/IP traffic?
Man, why do all people working the most menial tech jobs have such an obsession with suggesting some shitty "research" fad in CS as a solution to centuries-old complex problems in all other science fields? It's cringe, it reeks of ignorance, and the comparisons are flat out wrong most of the time.
It's especially worse when low-quality popular science journalism promotes this notion, like this Quanta article about the human vision system working just like transformers do.
> In this light consciousness also makes more sense to me, although consciousness feels more like a by-product; our (human) ability to hold an internal model of the world in our minds and interact with it is pretty advanced.
You can generate all kinds of sentences like this all day long in your consciousness. That does not make them true.
There is zero evidence for existence of physical matter/materialism.
The only thing we know for sure that exists is consciousness.
And you suggest the complete opposite with zero evidence.
There is also zero "evidence", by this extremely restrictive standard of "evidence", for existence of any consciousness aside one's own. This rhetorical strategy thus has a weakness: who or what exactly are you trying to convince?
Brains are not required to solve problems, yes, but they are required to think. That's one of their defining characteristics. It's not a thought without something like a brain, at best it's a pre-programmed/pre-trained behavioural response.
Let me humbly suggest to you to not make such (Truth) statements!
I don't know of any hard evidence that supports this. I know this is what most people believe, but the focus is on _believe_.
That’s misunderstanding what they’re saying. If you watch some of Michael Levin’s talks on YouTube, he specifically uses William James’ definition of intelligence (Intelligence is a fixed goal with variable means of achieving it) and has experimentally shown this capability at cellular scales. He shows how it cannot be pre-programmed behavior. There seems to be goal directed behavior.
> (Intelligence is a fixed goal with variable means of achieving it) and has experimentally shown this capability at cellular scales.
Supposing I accept that, what does this have to do with thought, which is the claim that I was specifically responding to? Does Levin or James also show that this can only be done by having thoughts?
Edit: for instance, as opposed to having some non-thinking process like gradient descent, or more plausibly, some kind of hill climbing.
Which is one of the arguments the ancient Greeks (Aristotle in particular) used to argue that God must exist. Things are clearly ordered to ends (have goal-directed behavior). Others came to the conclusion that all things that are are part of one enormous goal-directed-entity, but that conclusion involves a bootstrapping problem on the part of that entity (which is composed of parts) and so I don't hold with it.
This is pretty similar to a concept in "Children of Time" by Adrian Tchaikovsky.
I've always thought the book's concept of 'DNA' memory storage was sci-fi. Cool concept, but really far out. So it's pretty exciting that this sci-fi concept could happen.
What if we could drink something to give us the memories of someone else? That would be a way to drink a 'degree' and learn a ton fast.
"Glanzman was able to transfer a memory of an electric shock from one sea slug to another by extracting RNA from the brains of shocked slugs and injecting it into the brains of new slugs. The recipients then “remembered” to recoil from the touch that preceded the shock. If RNA can be a medium of memory storage, any cell might have the ability, not just neurons."
> “Indeed, the very act of living is by default a cognitive state, Lyon says. Every cell needs to be constantly evaluating its surroundings, making decisions about what to let in and what to keep out and planning its next steps. Cognition didn't arrive later in evolution. It's what made life possible.“
Yes. Cognition isn’t just about solving differential equations and the like. It also refers to the most basic functions/processes such as perception and evaluation.
1. "Bioelectricity"
This is a generic term which doesn't capture the nuance of charge gradients and chemical gradients in cells. While you can directly apply charges to interact with gradient based biological systems, this is a brute force method. Cells have chemically selective walls. So while applying an external electrical voltage can act in a similar manner as causing a neuron to fire, it is far less precise than the calcium and sodium channel mediated depolarization which implements normal firing. Said another way 'bioelectricity' is not simple.
2. Replacement
This one is a bit more subtle. If you find that you can affect a system by one means that is not the same thing as saying the means is the cause. Take the example of using RNA to transfer memory from one Aplysia to another. Immediately after transfer the recipient does not have the memory. It takes time for the introduced RNA to affect sensory cells so that they become more sensitive to stimulation. This is in contrast to a trained animal that has already undergone synaptic remodeling. If you have the appropriate synapses but were somehow able to remove all the relevant RNA in an instant, the animal would continue to 'remember' its training. Synapses are sufficient.
In reality there are multiple systems that work together over multiple timescales to produce the behaviors we observe. Some of those systems can have their contributions mimicked by other interventions. Because of this complexity you can never say 'it's really about X', the best you can say is 'X plays a major role' or 'X contributes Y percent to this observed phenomenon'.
> If you have the appropriate synapses but were somehow able to remove all the relevant RNA in an instant, the animal would continue to 'remember' its training. Synapses are sufficient.
I'm not sure these two statements are compatible. The first is definitely true, and rna does function on a slower timescale. We can't be 100% confident that some of the complexity we don't understand in the first statement wouldn't have an impact in the second scenario, can we?
I appreciate that epigenetics is a well-established field at this point but I worry people conflate its effects with other phenomena.
Having said that, are you familiar with the purkinje cell from a rabbit that they trained to respond to timed patterns of input in isolation?
Timed pattern=input 1, delay X, input 2, delay Y, then input 3.
Definitely more than a simple on/off switch type training, but does that rise to the level of "memory"?
Not if you removed the DNA. Epigenetic changes to the DNA are what maintain the synapse at it's "learned" state. Here's a link:
https://www.sciencedirect.com/science/article/pii/S240584402...
In addition, research has shown neurons communicating via mRNA (surrounded by a lipid).
https://www.nature.com/articles/d41586-018-00492-w
https://www.inverse.com/article/40113-arc-protein-ancient-mo...
Lots of interesting stuff in this arena.
Or was all this replicated? What strengths and weaknesses did they hypothesize in these studies? What did they prove or disprove? What’s the next steps? And can we already implement any of those in simulators?
(Note: I think agents poking and prodding the world can definitely be implemented in simulators. Even primitive, game engines should be able to model some of that.)
You can say the same thing about computer systems - as long as you don't understand the underlying logic. If you don't understand that the chemistry of transistors doesn't matter as much as the C code, you can say exactly the same critique about how a thinkpad works: "So while applying an external electrical voltage can act in a similar manner as causing a neuron to fire, it is far less precise than the calcium and sodium channel mediated depolarization which implements normal firing. Said another way 'bioelectricity' is not simple....In reality there are multiple systems that work together over multiple timescales to produce the behaviors we observe. Some of those systems can have their contributions mimicked by other interventions."
Once you do understand the logic - the 'why' of von neumann machines and Javascript and transistors, it's clear that your claim isn't true and there is an underlying logic. The trouble is, until we positively identify that logic, we can't know if it exists or not and we're stuck debating the bioequivalent of the fundamental computational significance of the clock cycle speed of a CPU.
We use programming languages like C to create complex branching algorithms that are turned a linear machine code tape. Programmers generally can not understand assembly even if they understand the branching code that is turned into assembly. Even if assembly had variables, just the fact that if/else's and function calls are turned into jumps is enough to make the code too complicated to understand. It might be possible to disassemble back to C by resolving the jumps into something that is easier to understand.
Imagine if brains worked the same way. That there is actually a naturally-forming high level "brain language" that is turned by a "brain compiler" function into a low-level "brain assembly," but when we look at it all we see is the assembly. That what the brain is actually doing is relatively simple, but because we can only observe the output of the compiler function it appears to be insanely complex to reverse-engineer.
Then again, I don't have the faintest idea of how brains work either.
First you exclaim there is an underlying logic, then in the next sentence you say we don’t know whether it exists, which completely contradicts your claim.
I don't know what the biological/medical field thought about single cell and tissue level intelligence before but I found this gap in the usual medical thinking (usually things are either genetic or biochemical/hormonal) quite mind blowing.
Hopefully this results in new opportunities for finer medical therapies.
This really has the potential to revolutionize our understanding of intelligence, mind and medicine. He may just tell cells to grow a new heart without modifying genes. He want to have what he calls an 'anatomical compiler' which translates our "designs" to electro-magnetic cell stimuli so that they will build this.
For me this is really pointing into a worldview that is much more in line with view that the ancient mystics from all cultures throughout all the ages have been pointing towards: Intelligence is something fundamental to existance, like space and time (maybe even more fundamental). It is all a play of intelligence, it is phenomenal and it can be tapped into. This is amazing!!!
Anyway, cells can hold (super) primitive models of the world and maintain internal balance in the face of anticipated events.
I'm just a cocktail philosopher, but aren't we all.
[0] https://podverse.fm/podcast/e42yV38oN
E.g., I wouldn't think GPT-4 is conscious, but I'm pretty sure there's a representation of abstract World and relationships within it following the neurons and weights. Otherwise it wouldn't be able to do much of it, that it is.
Also I think model of the World is just that - which can be represented as relationships between neurons, symbolising that model of the World.
And I think you can have a complex and a perfect set of neurons and their connections to represent everything in the most efficient manner for that size of parameters (neurons and connections together). There probably is the perfect configuration, but it couldn't even be achieved using training or evolutionary methods.
And none of it requires consciousness in my view.
So there's an arms race. The more brains you have the more you can model your tribe to know how to help or succeed. AND the bigger everyone's brain is the harder they are to model simply.
In this model consciousness is the "self model" or "self-consciousness" that allows you to model others opinion of yourself by having such an opinion yourself. And adjust their opinion by providing a narrative about yourself which you first have to craft, .... nd on and on with higher levels of abstractions.
So I think even if you want to make the sense that consciousness solves a lot of problems it doesn't need to, it may have been a "real" solution to a "real" problem at some point in our history. And from that point on, it was no longer important whether it was the best solution.
I do think it's fair to say that lots of remarkably complex informational problems are solved in a p-zombie way, which is to say, with every outward appearance of intelligence (slime molds solving mazes, collective behaviors of ants). So I do think evolution or nature writ large "agrees" with you that consciousness isn't strictly necessary.
1) A huge stream on sensory data only some of which gets promoted to conscious awareness.
2) Some of that raw data and other conscious outputs are persisted into working, short, and long term memory.
3) And your consciousness works recursively using (2) as well as (1) as inputs.
All the stuff in GPT that gets called "memory" in machine learning seems much more like (1) and it lacks any ability to persist data outside its context window so we're still missing something.
That introspective analysis is consciousness. Humans have just improved the same mechanism allowing for more abstract analysis.
If intelligence and/or consciousness arise as emergent properties in the right sort of complex system, they will disappear from view in a low-level analysis of the causal processes occurring in that system.
I'm not talking about whether they are or aren't, but surely all intelligent beings would say and think they're conscious?
Did someone say it is? Parent explicitly called it out as a by-product.
Deleted Comment
Anyway, the strip does contain the state of the world around it: the temperature is modeled by how much the bimetal is bent. I think indeed it is a minimal example of a complex system, one that at first glance defies explanation, it seems to have purpose (keep temp stable), until you understand the inside.
Anyway, "Is a virus alive?", "Are these specimens the same species?", ... Us humans like our boxes, but at the edges they almost always go wrong.
Is this because it is a completely man-made system and not one that evolved slowly over time through natural processes?
Your brain is taking in a lot of information at the edges of your awareness, light, sounds, touch, etc. are all getting absorbed and transmitted to your brain. As that information is transmitted along your neurons it's getting summarized, then merged with other summarized information and summarized again. The brain is getting summaries of summaries, and developing a unified categorizing of the global state across all it's inputs.
Then the brain takes that summary and makes a prediction about the future state. The summarization is energy-efficient. By categorizing all that data into a global state you make decision making possible. "When my boss seems stressed all week, then calls a bunch of people one-by-one into his office on Friday afternoon, I know lay-offs are coming. I better polish up my resume." From "stress/anxiety/unease" in the environment to "danger is coming I need to fight/flight".
Your brain is taking that summary/categorization and figuring out what it needs to do next. If "X" happens then I should do "Y" to "stay-safe/get-food/find-a-mate". The brain is very good at capturing and summarizing data, and making a prediction because that process is much more efficient than doing otherwise. Instead of foraging everywhere for food and hoping I just bump into something that will provide sustenance, I know if X, Y, and Z happen then food will be "here", and I can get lots of it.
You can apply this same model to all actions the brain directs. It also helps make sense of why maladaptive behaviors develop. Sometimes the summary is incorrect, or was formed based on past information that no longer applies, and it may need to be unlearned.
The definition of intelligent I give is “to mitigate uncertainty.” If it does not mitigate uncertainty, it is not intelligent.
It is merely of constrained intelligence. Perhaps your expectations are too broad.
If the thermostat reacts appropriately to environmental changes then it is performing its role intelligently.
Dead Comment
So in short: I cannot understand what is the actual substance of subjective experience.
I feel like most of what we call "consciousness" is converting short term memory into selected long-term memory, facilitated by language. Because then you're under, you can even be "interactive" but you're not conscious of it because your short term memory has been disabled.
As to "human intelligence", honestly, I think that human languages that let us convert our "consciousness" into a shared hallucination is the key evolutionary advantage. Human intelligence comprises a hive mind in a sense, that our experience of the world is hugely affected by the shared social experience where language transfers memory from person to person.
This problem just goes away if you assume that there is no dividing line between the "experience" of you and the "experience" of any other computational system. Actually try to think, what does a computer "experience"? An atom? What does it feel like to be a standing desk?
- you can represent complex state in a distributed way, so each neuron only encodes a small part of a larger signal
- the system has a working model of the environment, including our value judgements for all states, which are basically our emotions
Such a system can have experience because it has a latent space to encode experience in. It feels like something to be an agent because of the external environment and internal models of the environment, which include imagination and emotions. And this feeling is essential in choosing our actions, so there is a feedback loop action-to-emotion, then emotion-to-action. Our feelings are causal.
As to what it is, or why it exists at all, I don't think there will ever be answer to that. It just is.
Its definitely a strange thought, but it seems more likely to me than neurons or whatever other brain matter somehow produce consciousness out of thin air as soon as they some level of composition.
Pure materialism also seems very ill defined to me. The physical world is after all only observable/detectable/can be studied upon, through conscious experience. At best we can say what is real is what is universally agreed upon by all observing conscious agents. If hypothetically there were only two of us, and I said "There is no ball in front of us" and you said "There is", then what is the meaning of physical true/reality?
You can say lets use a detector. But then again, if I experienced the detector as saying false and you said its true, what do we do?
It seems unavoidable that reality is a part of conscious experience, and not the other way around.
Your subjective experience is that incompetent model. Your model doesn't know how to correctly judge human character, so you misread people and always wonder why they say one thing but another is true, and it doesn't click that they're lying the whole time. You can't keep track of time because the internal clock just isn't implemented, so the who day seems to drag on, or maybe fly by too quickly.
It's all just really shitty software. Layers upon layers. And because humans believe this to be some mystical thing, rather than trying to fix it from the inside, they assume that it's awesome, necessary, and why would anyone want to fix it?
No small fraction of it is simply because our memory is faulty. The only time you ever remember anything is the first time you remember it, every memory access of that is really you remembering the last time you remembered it. Each access is lossier than the last, and confabulation is guaranteed. This seems to be true even moments after the event.
If it was anyone other than evolution who wrote your code, you could sue them for criminal negligence.
And that's before we even get to the part where you find out you're not even you. Inside your skull is another being, an intelligent one, with desires and goals. But you can't see, hear, or feel this being. It's invisible. The "you" that I'm talking to, exists because this being once upon a time needed to simulate the other humans around him, so he could anticipate them well enough to not be out-competed. He has a pretty good idea what they'd say if he asked them questions, how they'd respond to threats and challenges, what sets them off (so he can avoid fights). And, by mistake or design, he used this simulator to simulate himself (maybe to bootstrap it? if the simulation's output matches his known answers, it's working correctly?).
You're the simulation. When the judge asks the psycho teenager why he put the cat in the microwave and he says "I dunno" he's telling the truth. He does not know why. When your girlfriend cheats on you, and she's crying hysterically and can't tell you why she did it, she's not just lying (either to hurt you or to spare feelings)... she doesn't know. It was that other being in their skulls doing these things. They're just the simulations.
Now, you've either been poking around in your own head, seeing little glimpses of what I'm talking about, making you wonder if I'm not on to something, or you're incapable of that. I've met both kinds of people. If you're the former, you're wondering just how much of it I understand, because some of the glimpses paint a slightly different picture from what I describe. That's because our minds weren't built the same way. No two are alike, not in a special snowflake way, but instead like no two shacks in shantytown have the same kind of leaky cardboard roofs. And, if you're the latter...
I don't think the biological reality is conceptually any more complicated, except that the mind and its data are complex in exactly the way that completely hides the abstraction: roughly, by being very good at ignoring meaningless artifacts of the abstraction.
The hard part isn't imagining such a subjectivity, but imagining that I am that.
And everything emerges from there.
This materialism stems from René Descartes and his fellow philosophers.
And in the West it's often subconsciously combined with evolutionary theory: consciousness developed because it was useful somehow. However, that's a very big leap to make.
Both theories have good arguments going for them, but they are very theoretical and need a lot more proof. Yet they form the basis for pretty much all Western thought.
From a scientific perspective we have no idea how to create new consciousness or what it is.
From a human's experience it's more the other way around, reality is an emerging property of consciousness.
At the same time, we also learned that matter and time are not as solid as we thought a few centuries ago.
I'm not even a cocktail biologist, but my understanding is that cells effectively operate via a web of complex chemical reactions. So the notion of a cell holding primitive models might be analogous to the way a CPU executes an assembly instruction: not because it "thinks", but because, given how it's wired, it's nearly inevitable (barring solar radiation, I suppose, which incidentally also goes for cells) that it will react to a stimulus in a predefined way, even though the way cells react to stimuli is far more advanced than a CPU's.
In a similar way, "anticipating events" could involve an analogue of computer memory: the processes that have run so far have led to certain state being saved to memory, and that state now influences how the system reacts to stimuli in a way that's different from how it reacted before (e.g. summing an incoming value with the value stored in a register).
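As a toy illustration of that register analogy (my own sketch; the class name, threshold, and rule are made up): the reaction rule is fixed, i.e. "wired", but the saved state changes what the same stimulus produces.

    class Cell:
        def __init__(self):
            self.register = 0.0  # stands in for accumulated chemical state

        def stimulate(self, signal):
            # same fixed rule every time; only the stored state evolves
            self.register += signal
            return "fire" if self.register > 3 else "rest"

    c = Cell()
    print(c.stimulate(1))  # rest
    print(c.stimulate(1))  # rest
    print(c.stimulate(2))  # fire: earlier inputs changed the response

No step here involves "thinking", yet the system's history visibly shapes its behavior.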
This CPU analogy of yours doesn’t comport very well with the article we’re commenting on, which detailed some specific experiments that show cells are not reacting in a predefined way that is due to their ‘wiring’, contrary to previous and maybe incomplete understanding of how cells work. I don’t know if the RAM analogy helps since the surprise is that non-brain cells do have memory and do cooperate with other non-brain cells to solve certain problems, and these collections of non-brain cells can apparently remember solutions to problems over time. So yes, memory can help with anticipating events, but that really supports the idea that cells are dynamic and doing some non-trivial processing vs the possibly outdated notion that they’re hard-wired and deterministic.
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” ― Edsger W. Dijkstra
If we are talking about which kinds of complex systems (our brain, a cell, a computer, an LLM, a beehive, etc.) think, and how, we should note that there is nothing magical[0] in our brain that makes our thinking special, so other blobs of atoms that are not our brain can likely do things analogous to what our brain does.
This is to say that explaining in reductionist terms how something supposedly thinks is not proof that it is not really thinking. Otherwise a sufficiently intelligent alien could prove that you are not really thinking (just a bunch of ions dancing).
[0] and if there is something magical, then we don't yet understand how it works or where else it is magicing stuff.
I think these are likely different only by way of their level of complexity. We simply substitute a word like "think" when the reactions to stimuli are far too complex and numerous for us to track fully. But ultimately said "thinking" is made up of many, many cells following those same stimulus/reaction patterns, as in the sketch below.
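A toy way to see complex aggregate behavior emerging from identical simple rules (my own sketch, using the classic Rule 110 cellular automaton; the grid size and step count are arbitrary):

    RULE = 110
    row = [0] * 31 + [1] + [0] * 31  # one active cell in the middle
    for _ in range(8):
        print("".join(".#"[c] for c in row))
        # every cell applies the same fixed stimulus/response table
        row = [(RULE >> (4 * l + 2 * c + r)) & 1
               for l, c, r in zip([0] + row[:-1], row, row[1:] + [0])]

Each cell only ever looks at its two neighbors and itself, yet Rule 110 is famously capable of arbitrarily complex (even Turing-complete) behavior in aggregate.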
What if it is wired to think?
I've occasionally run into science podcasts, going back almost a decade, where some researcher talks about the computational power of cell membranes, and how synapses apparently evolved from these mechanisms. Amoebas and paramecia navigate their environments, sense, and react through their cell membranes.
The upshot of this for AI is that the neural network model may be drastically incomplete, with far more computation actually happening inside individual neurons.
My impressions about this were strongly influenced by Sam Harris's Waking Up book and app.
We already know that our self-model can be entirely wrong about our feelings. People with paralyzed or even missing limbs often believe, at first, that they just don't want to move that limb. So they think they are having one experience, but they are wrong about their own internal experience: in fact, they are not moving that limb because they can't. And there are many other similar examples of people being wrong about their own internal experiences, typically but not exclusively because of some illness.
So, it's possible that our internal experiences are in fact only a model in which one part of our brain interprets the actions of other parts of our brain, often retroactively.
Note: I'm not claiming this is the truth or silly things like "if you believe in science you have to believe this". It's just another cocktail philosopher's story of what consciousness might be. Other stories are just as plausible, and just as consistent with the little we do know in this area.
I feel like the matrix is about to eject me, btw.
Thanx, I'm looking for Harris' books right now.
I think everyone should ponder this when thinking about how they think, and whether they are really the one doing the thinking at all.
"Man can do what he wills but he cannot will what he wills.” ― Arthur Schopenhauer, Essays and Aphorisms
Something something LLMs can only predict the next word.
I hate to spin up this trendy debate again, but it's always funny to me to see the dissonance when talking about the exact same things in biological and mathematical cases.
The human mind is _so much more_ than a prediction machine, and incredibly complex... All that's before you get into the way the endocrine system interacts with your mind.
In some parts of the brain, a single neuron can have on the order of 250,000 connections. The speed at which neuronal signals travel varies from neuron to neuron, from 2.5 m/s to 200 m/s.
Human minds are more than just prediction. The anterior lateral prefrontal cortex seems solely dedicated to prediction (not that nothing else predicts, just that the ALPC seems dedicated to that one task) and is extremely good at it. Prediction can influence all sorts of mental processes, including almost all forms of perception... but influencing perception is not the same as being all of perception. If something unpredictable enough happens in front of you, you'll still see it.
Sure, there are limits to that: when you're focused on a task, the predictive parts of sight tend to filter out visual data that doesn't match the signal you're looking for (see: basketball players passing the ball and a moonwalking man in an ape suit), but if every basketball player turned into Spaghetti-Os and started screaming, you'd still hear the sounds and see the O's.
So sure: LLMs do a good job at basic prediction but they're nowhere near the complexity of the human mind, of which prediction is only a small piece.
(And let's not even get into efficiency... A brain runs on 20W of power)
This is the right term to use here.
> Something something
If human brains have a model, then is language the transport layer on top of that? Is trying to get to intelligence via language no better than trying to get to "google" by modeling its TCP/IP traffic?
It's even worse when low-quality popular-science journalism promotes this notion, like this Quanta article about the human vision system working just like transformers do.
You can generate all kinds of sentences like this in your consciousness all day long. That does not make them true.
There is zero evidence for the existence of physical matter/materialism.
The only thing we know for sure that exists is consciousness.
And you suggest the complete opposite with zero evidence.
Let me humbly suggest that you not make such (Truth) statements! I don't know of any hard evidence that supports this. I know this is what most people believe, but the focus is on believe.
Supposing I accept that, what does this have to do with thought, which is the claim that I was specifically responding to? Does Levin or James also show that this can only be done by having thoughts?
Edit: for instance, as opposed to having some non-thinking process like gradient descent, or more plausibly, some kind of hill climbing.
I've always thought the concept of 'DNA' memory storage in the book was sci-fi. A cool concept, but really far out. So it's pretty exciting that this sci-fi concept could actually happen.
What if we could drink something to give us the memories of someone else? That would be a way to drink a 'degree' and learn a ton, fast.
"Glanzman was able to transfer a memory of an electric shock from one sea slug to another by extracting RNA from the brains of shocked slugs and injecting it into the brains of new slugs. The recipients then “remembered” to recoil from the touch that preceded the shock. If RNA can be a medium of memory storage, any cell might have the ability, not just neurons."
Yes. Cognition isn’t just about solving differential equations and the like. It also refers to the most basic functions/processes such as perception and evaluation.