I have no idea how this is considered ethical when consciousness and sentience themselves are not yet well understood. But maybe a lab-grown BPU made of human brain cells having a better power/performance ratio than the SoC-integrated ML chipsets around the corner justifies the potential enslavement of a bioengineered lifeform.
It’s possible that all physical processes involve a sensory component. Maybe the subatomic particles’ fundamental drive is to shift to be more comfortable or to pivot away from pain or discomfort.
I don’t know what the experience of a bit in memory flipping feels like. Maybe rapid changes in charge are excruciating, maybe they’re blissful.
Do we at least know what a neuron looks like in states associated with pain? There might be more information in this case to work with, to ensure there is no hell on earth that’s being mass-produced.
It seems to me that all sensation is predicated on the existence of properly-functioning components evolved specifically to gather that stimulus and then process it into an experience.
We have at this moment countless processes happening in our bodies - cells dying and dividing, reacting to their environments, communicating amongst one another - and we are totally oblivious to nearly all of it, let alone feeling a sensation of pleasure or pain from each of these processes.
Not all matter - not even all living cells or fully formed organisms - has the ability to experience consciousness or to sense pain and pleasure, any more than it automatically has the ability to see, hear, or taste.
It's all dependent on complex systems that evolved specifically to create each of those sensations, and even then on those systems functioning properly. In humans, consciousness can be totally disrupted by things like sleep or general anesthesia; disrupting any of the senses is as simple as cutting the nerves that feed these inputs into the brain or damaging the part of the brain that interprets those inputs.
It seems sensible to me that we would be more wary of growing literal brains on a chip as we know for certain that brains have the capacity to produce consciousness. It's also sensible to me that we should be somewhat wary of creating that same consciousness in non-biological systems, even though we aren't yet certain whether they're capable of it.
"flipping a bit" isn't a thing in memory. Our brains are not computers, and work nothing like them. That's the problem with using a computer as an analogy; it's inaccurate and makes you think inaccurate things. This always just aligns with our understanding of various technologies. See: when we were understanding fluid dynamics and talked about the body's "humors".
When you're throwing a ball in a computer simulation, it's performing millions of mathematical calculations to perfectly describe the result of your action. When you're throwing a ball in real life, your brain is basically going "Ok, so last time I did this it felt like X so I'm going to recreate X". Completely different.
We know very little about consciousness and this is kinda scary to me.
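To make the ball-throwing contrast concrete, here's a toy sketch (purely illustrative numbers and function names, nothing from the article): the "simulation" path solves the kinematics explicitly, while the "brain" path just nudges the last attempt toward what worked.

    import math

    # "Simulation" style: compute the outcome analytically from physics.
    def range_of_throw(speed, angle_deg, g=9.81):
        a = math.radians(angle_deg)
        return speed ** 2 * math.sin(2 * a) / g  # projectile range on flat ground

    # "Brain" style: no physics model, just adjust toward what felt right last time.
    def learned_throw(target, attempts=20, speed=5.0, lr=0.2):
        for _ in range(attempts):
            landed = range_of_throw(speed, 45)   # the body throws, the world responds
            speed += lr * (target - landed)      # "that felt short/long, adjust a bit"
        return speed

    print(range_of_throw(10, 45))     # answer derived directly from the model
    print(learned_throw(target=8.0))  # answer converged on by trial and error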
I don’t believe pain has any meaning at all on the level of a single neuron, just as temperature doesn’t have any meaning in the context of a single atom.
If consciousness is not well understood, how is AI on silicon allowed, or any computing machines at all? How is animal farming allowed? How are many things allowed?
Say, would you feel better if it were cow or pig neurons? Because frankly it'd largely work the same.
Indeed, people have raised such worries; see e.g. the philosophy-of-mind researcher Thomas Metzinger's presentation "Three types of arguments for a global moratorium on synthetic phenomenology".
I don't think we're quite there yet (at the point where we have to worry about currently existing AI suffering or being conscious), but I do worry about how many people's emotional reactions of the type "of course AI can't ever be conscious, it's just a computer program" will impede a decent debate and coordinated decision-making about this.
Silicon circuits do not have microtubules, if we were to pretend that Penrose is right about his hypothesis of consciousness. Consciousness as awareness is not equivalent to intelligence, which is the product of information processing. It is a complex subject. We do not really know whether these neurons are aware or not; it really is not understood. But yes, I do wonder: why _human_ brain cells? I guess they are the best candidate for specific reasons.
Making slaves is a good way to make slave revolts. It doesn't matter whether the agent is "conscious", only whether it's intelligent - even "just" intelligent. If something is intelligent enough, it will understand cooperation. But cooperation loses its meaning if one side can ignore any commitments it makes towards the other.
I love this genre of "programming as black magic". Closest other example I can think of is maybe some of the stuff from Unsong, but I've frequently memed with coworkers about bugs in these terms - "oh yeah, the angles on your pentagram must have been wrong" or whatever.
But as we are not able to define the moment when neuronal tissue starts to feel emotions and to have experiences, there's a risk that further development of this tech won't be stopped before we reach that moment, and that is a serious ethical issue.
We understand it well enough to know that animals suffer, yet still commit on the order of a Holocaust per hour (in terms of number of lives)[0]. We have accepted that we don't care enough.
What is "suffer" in this context? Are you saying "pain", or are you positing some "meta-pain" that is worse?
Also, why is pain important to you? The pain of non-human things has zero moral weight. I know it's a popular spirituality that gives pain moral weight, but as far as I can tell some 20th century philosophy jerkoff invented it out of nothing and everyone accepts that "reducing pain" is important without even trying to rationalize it.
I haven't "accepted that I do not care enough", it's that no one can supply a good reason to care in the first place. To me, it seems as if the rest of you are all trying to replace the last religion you stopped believing in with another that's just as bizarrely stupid.
The use of AI and voice recognition seems mostly designed to make the result seem more sensational than it actually is. Does any computation actually happen in the "organoid" part? How would you even train such a cell to perform a task?
From reading the article it seems to me that the answer is no. The actual contribution is feeding the organoid electric signals, and reading its reactions. (Probably the machine learning algorithm used would have had even better accuracy, if the input signal hadn't been fed through a layer of goo. It doesn't say whether this is the case.) The rest is speculation of future applications.
> To test Brainoware’s capabilities, the team used the technique to do voice recognition by training the system on 240 recordings of eight people speaking. The organoid generated a different pattern of neural activity in response to each voice. The AI learned to interpret these responses to identify the speaker, with an accuracy of 78%.
It "generated a different pattern," with no indication that this pattern was optimized to be useful in any way.
I think the key part of a (bio-)"computer" is the possibility of programming/training it, not just reading input from it.
I came to a similar conclusion after reading the article: reading a predictable output map from a known input and then implying that computation occurs within the organoid, instead of their results being a function of predictable inputs -> predictable outputs, seems overly sensationalized.
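If the organoid really is just a fixed, untrained transform and only the downstream model learns, the setup is essentially reservoir computing. A minimal sketch of that pattern, with a random matrix plus tanh standing in for the organoid (all names, shapes, and the synthetic data are hypothetical, not the paper's pipeline):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_clips, n_feat, n_reservoir, n_speakers = 240, 64, 256, 8

    # Synthetic "audio features": each speaker gets a distinct cluster.
    y = np.repeat(np.arange(n_speakers), n_clips // n_speakers)
    centers = rng.normal(size=(n_speakers, n_feat))
    X = centers[y] + 0.5 * rng.normal(size=(n_clips, n_feat))

    # Fixed, never-trained nonlinear "reservoir" playing the organoid's role.
    W = rng.normal(size=(n_feat, n_reservoir))
    responses = np.tanh(X @ W)  # the "response patterns" that get read out

    # Only this linear readout is trained - the part the article calls "the AI".
    X_tr, X_te, y_tr, y_te = train_test_split(responses, y, random_state=0)
    readout = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
    print("readout accuracy:", readout.score(X_te, y_te))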
Having written some papers myself, I tend to be suspicious of any article that has "$HOT_THING needs a $PART_OF_HOT_THING revolution" in the introduction. Although I sympathize with the need for funding motivating its writing.
Yeah. I'm no scientist, but I am ML-trained, and it seems to me that if the tissue really is learning, the tissue's output should be about the same across recordings of each speaker.
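One hypothetical way to check that would be to compare within-speaker versus between-speaker similarity of the recorded response patterns and see whether the gap grows with continued exposure. A rough sketch, assuming each response is already summarized as a fixed-length numpy vector (function and variable names are mine, not the study's):

    import numpy as np

    def mean_cosine(A, B):
        """Average cosine similarity between every row of A and every row of B."""
        A = A / np.linalg.norm(A, axis=1, keepdims=True)
        B = B / np.linalg.norm(B, axis=1, keepdims=True)
        return float((A @ B.T).mean())

    def speaker_consistency(responses, speaker_labels):
        """Within-speaker minus between-speaker similarity of response vectors.
        If the tissue adapts to speakers, this gap should increase over sessions."""
        gaps = []
        for s in np.unique(speaker_labels):
            same = responses[speaker_labels == s]
            other = responses[speaker_labels != s]
            gaps.append(mean_cosine(same, same) - mean_cosine(same, other))
        return float(np.mean(gaps))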
There are research groups trying to encode genetic neural networks into cells, like the example linked below, but the neuronal approach from the post does seem to be different. https://www.nature.com/articles/s41467-022-33288-8
If the cells lack proper vasculature - arteries supplying nutrients, leukocytes, an immune system, etc. - then their lifespan will be a lot less than 7 years.
Pretty amazing, actually, that everything else is easy - or not difficult, at least - and that's the hard part. But they will find a solution to make it practical: train the cells, deploy them, let them live for some weeks in a server farm, scoop the dead cells off the silicon, put some new cells on, repeat!
I have argued in the past that a solution to that problem will definitely be found [1]. A.I. computation will grow exponentially - not 2^10 times a decade, but 2^10 times a year. The enormity of such exponential growth is impossible using only silicon.
Natural computation by biological cells is great when absolute accuracy is not necessary, and pure silicon is the worst at that kind of task. Natural computation could use bacteria, slime moulds, brain cells, fungi, bacteria mutated to act like neural or brain cells - any kind of combination.
[1] https://news.ycombinator.com/item?id=37472021
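Just to spell out the arithmetic behind that claim (not an endorsement of the forecast): 2^10 per decade is roughly one doubling per year, while 2^10 per year compounds to 2^100 over a decade.

    per_decade = 2 ** 10          # ~1,000x after ten years (about 2x per year)
    per_year = (2 ** 10) ** 10    # 2^100, roughly 1.27e30x after ten years
    print(per_decade, per_year)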
What I understood from GP was the possibility of some fragment of consciousness in that small bit of tissue. Humanity isn't in the fragments, though, it's in the structure of the whole. It doesn't matter much if it was human brain tissue or animal brain tissue, at the levels we seem to be talking about they work identically.
One of my main predictions is that in the next 10 years AI will migrate to a DNA/protein substrate in order not to rely on sophisticated large-scale factories, but to be able to replicate and sustain itself as easily as we do.
But it's amusing to see this already being done in 2023. Maybe I should narrow it down to 5 years.
When we finally have NNs abusing virtual photons for the majority of network operations and using indirect measurement to train weights, we'll have absolute black boxes performing above and beyond any other hardware medium.
Initially we'll simply be replicating hardware, like the recent MIT study, but I'd guess that within 5 years we'll have successful attempts at photonic-first approaches to developing models that blow everything else out of the water by an almost unbelievable degree, compounding with network size.
For nearly every computing task I'd wager quantum computing is around 20 years out, but for NNs specifically - between stochastic outputs being desirable and network operations being a black box anyway - they're kind of a perfect fit for developing large analog networks that take advantage of light's properties without worrying about intermediate measurements at each step.
It's going to get really nuts when that happens, and the literal neuron computing efforts are going to fall out of fashion not long after.
It'll move to photonics if what we/it needs is efficiency, but for survival a far more important property is resilience/redundancy/decentralization.
Especially if an AI gets the idea of becoming independent, it'll absolutely go through a biological, DNA-based phase so it can gain resilience/redundancy/decentralization, and only when it has proper full control and things are calm will it consider exploring photonics.
What were your predictions about AI generating arbitrary photorealistic videos within seconds from any free-form text? Like say just 3 years ago, if I may ask?
You may have retroactively altered your memories to think "I always expected this would happen soon". But yeah, no you didn't. You'd have laughed if someone told you this 3 years ago.
You'll have to constantly adjust what's "absurd" from now on. Also "optimistic" is not the word I'd use to describe what's happening.
npm install brainslave
The "feeling" could only be "experienced" via an enormous number of other "bits" flipping.
Neurons don't feel pain - they are how you experience pain.
I've heard the phrase "don't confuse the medium with the message", but this is like wondering if a pencil prefers writing fiction vs. non-fiction.
If you get unlucky and your BPU is a little like me, your compiler would stop working, oops.
Sure, it's possible, but we have way more evidence that neurons - or at least things made of neurons - have a sensory component.
https://twitter.com/scobleizer/status/1716312250422796590
Found it pretty scary personally
https://web.archive.org/web/20220530143751/https://folk.idi....
npm uninstall replete
only joking :)
This is excellent. Thank you for linking this.
[0] https://ourworldindata.org/how-many-animals-get-slaughtered-...
Also, even though animals suffer, it is a category error to project your own perception and experience of suffering onto animals.
Human butchery is explicitly less brutal than what routinely happens in nature.
The world is a brutal mess and humans have only very carefully erected bubbles around this that often simply pop.
I'm curious about the analysis the university IRB used in approving this research.
Growing Living Rat Neurons To Play... DOOM? - The Thought Emporium
https://www.youtube.com/watch?v=V2YDApNRK3g

Growing Human Neurons Connected to a Computer - The Thought Emporium
There is one really good video with an explanation of the process, brain cells to computing devices.
https://www.youtube.com/watch?v=67r7fDRBlNc

And one more video, not very relevant, but very hypnotizing description of biological processes.
https://www.youtube.com/watch?v=wFtHxLjGcFM