replete · 2 years ago
I have no idea how this is considered ethical when consciousness and sentience itself is not yet well understood. But maybe a lab-grown BPU made of human brain cells having a better power/performance ratio than the new SoC-integrated ML chipsets around the corner justifies the potential enslavement of a bioengineered lifeform.

npm install brainslave

odyssey7 · 2 years ago
It’s possible that all physical processes involve a sensory component. Maybe the subatomic particles’ fundamental drive is to shift to be more comfortable or to pivot away from pain or discomfort.

I don’t know what the experience of a bit in memory flipping feels like. Maybe rapid changes in charge are excruciating, maybe they’re blissful.

Do we at least know what a neuron looks like in states associated with pain? There might be more information in this case to work with, to ensure there is no hell on earth that’s being mass-produced.

virgildotcodes · 2 years ago
It seems to me that all sensation is predicated on the existence of properly-functioning components evolved specifically to gather that stimulus and then process it into an experience.

We have at this moment countless processes happening in our bodies: cells dying and dividing, reacting to their environments, communicating with one another. We are totally oblivious to nearly all of it, let alone do we experience a sensation of pleasure or pain in each of these processes.

Not all matter, not all living cells or fully formed organisms even, have the ability to experience consciousness or sense pain and pleasure any more than they automatically have the ability to see, hear, or taste.

It's all dependent on complex systems that evolved specifically to create each of those sensations, and even then on those systems functioning properly. In humans, consciousness can be totally disrupted by things like sleep or general anesthesia; disrupting any of the senses is as simple as cutting the nerves that feed these inputs into the brain, or damaging the part of the brain that interprets them.

It seems sensible to me that we would be more wary of growing literal brains on a chip as we know for certain that brains have the capacity to produce consciousness. It's also sensible to me that we should be somewhat wary of creating that same consciousness in non-biological systems, even though we aren't yet certain whether they're capable of it.

knodi123 · 2 years ago
> I don’t know what the experience of a bit in memory flipping feels like.

The "feeling" could only be "experienced" via an enormous number of other "bits" flipping.

Neurons don't feel pain- they are how you experience pain.

I've heard the phrase "don't confuse the medium with the message", but this is like wondering if a pencil prefers writing fiction vs. non-fiction.

kulahan · 2 years ago
"Flipping a bit" isn't a thing in human memory. Our brains are not computers, and work nothing like them. That's the problem with using a computer as an analogy; it's inaccurate and leads you to inaccurate conclusions. Our models of the mind always just track whatever technology we understand at the time. See: when fluid dynamics was the cutting edge and we talked about the body's "humors".

When you're throwing a ball in a computer simulation, it's performing millions of mathematical calculations to perfectly describe the result of your action. When you're throwing a ball in real life, your brain is basically going "Ok, so last time I did this it felt like X so I'm going to recreate X". Completely different.

We know very little about consciousness and this is kinda scary to me.

eddd-ddde · 2 years ago
Instead of a hard-coded scheduler, brainPUs will rely on the user's feelings of procrastination to schedule different tasks automatically.

If you get unlucky and your BPU is a little like me, your compiler would stop working, oops.

aranchelk · 2 years ago
I don’t believe pain has any meaning at all on the level of a single neuron, just as temperature doesn’t have any meaning in the context of a single atom.
boringuser2 · 2 years ago
You're really off-base with this speculation.
morsecodist · 2 years ago
> It’s possible that all physical processes involve a sensory component

Sure it is possible but we have way more evidence neurons have a sensory component, or at least things made of neurons.

3cats-in-a-coat · 2 years ago
If consciousness is not well understood, how is AI on silicon allowed, or any computing machines at all? How is animal farming allowed? How are many things allowed?

Say, would you feel better if it was cow or pig neurons? Because frankly it'd largely work the same.

bondarchuk · 2 years ago
Indeed, people have raised such worries; see e.g. the philosophy-of-mind researcher Thomas Metzinger's presentation "Three types of arguments for a global moratorium on synthetic phenomenology".

I don't think we're quite there yet (at the point where we have to worry about currently existing AI suffering or being conscious), but I do worry about how many people's emotional reactions of the type "of course AI can't ever be conscious, it's just a computer program" will impede a decent debate and coordinated decision-making about this.

replete · 2 years ago
Silicon circuits do not have microtubules, if we pretend that Penrose is right about his hypothesis of consciousness. Consciousness as awareness is not equivalent to intelligence, which is the product of information processing. It is a complex subject. We do not really know whether these neurons are aware; it simply is not understood. But yes, I do wonder: why _human_ brain cells? I guess they are the best candidate for specific reasons.

whywhywhywhy · 2 years ago
Cortical Labs working on this

https://twitter.com/scobleizer/status/1716312250422796590

Found it pretty scary personally

replete · 2 years ago
Your HN username matches my thoughts perfectly, thanks for sharing this.
worldsayshi · 2 years ago
Making slaves is a good way to make slave revolts. It doesn't matter whether the agent is "conscious", only whether it's intelligent. If something is intelligent enough, it will understand cooperation. But cooperation loses its meaning if one side can ignore any commitments it makes towards the other.
anthk · 2 years ago
There was a circuit evolved with genetic algorithms which was self-assembled.

https://web.archive.org/web/20220530143751/https://folk.idi....

kthartic · 2 years ago
Maybe your conscious experience is but one of thousands of installed instances. How in-demand do you think you are?

npm uninstall replete

only joking :)

civilitty · 2 years ago
If some lab grown brain tissue were all that’s needed for sentience we wouldn’t have such a hard time understanding it to begin with.
Moomoomoo309 · 2 years ago
You're telling me installing Linux on a dead badger is a _bad_ thing? http://strangehorizons.com/non-fiction/articles/installing-l...
epiccoleman · 2 years ago
I love this genre of "programming as black magic". Closest other example I can think of is maybe some of the stuff from Unsong, but I've frequently memed with coworkers about bugs in these terms - "oh yeah, the angles on your pentagram must have been wrong" or whatever.
Andrex · 2 years ago
> An alternative distribution is Pooka, which is available for download at SoulForge.net.

This is excellent. Thank you for linking this.

Coder1996 · 2 years ago
Well, this is just neuronal tissue that, as far as we know, is only capable of what it has been trained to do. It has no emotions, no human experience.
ArekDymalski · 2 years ago
But since we are not able to define the moment when neuronal tissue starts to feel emotions and have experiences, there's a risk that further development of this tech won't be stopped before we reach that moment, and that is a serious ethical issue.
smrtinsert · 2 years ago
I can't understand it either. As a squishy science graduate and a technologist I find this category of experiments revolting from both angles.
nervousvarun · 2 years ago
Obligatory "Lena" (Miguel) reference: https://qntm.org/mmacevedo
jackbrookes · 2 years ago
We understand it well enough to know that animals suffer, yet we still carry out on the order of a Holocaust per hour (in terms of number of lives)[0]. We have accepted that we don't care enough.

[0] https://ourworldindata.org/how-many-animals-get-slaughtered-...

boringuser2 · 2 years ago
Correct.

Also, even though animals suffer, it is a categorical error to project your own perception and experience of suffering onto animals.

Human butchery is explicitly less brutal than what happens casually in nature.

The world is a brutal mess, and humans have only very carefully erected bubbles around this fact, bubbles that often simply pop.

NoMoreNicksLeft · 2 years ago
What is "suffer" in this context? Are you saying "pain", or are you positing some "meta-pain" that is worse?

Also, why is pain important to you? The pain of non-human things has zero moral weight. I know it's a popular spirituality that gives pain moral weight, but as far as I can tell some 20th century philosophy jerkoff invented it out of nothing and everyone accepts that "reducing pain" is important without even trying to rationalize it.

I haven't "accepted that I do not care enough", it's that no one can supply a good reason to care in the first place. To me, it seems as if the rest of you are all trying to replace the last religion you stopped believing in with another that's just as bizarrely stupid.

mike_ivanov · 2 years ago
Life will find a way.
asgerhb · 2 years ago
The use of AI and voice recognition seems mostly designed to make the result seem more sensational than it actually is. Does any computation actually happen in the "organoid" part? How would you even train such a cell to perform a task?

From reading the article it seems to me that the answer is no. The actual contribution is feeding the organoid electric signals and reading its reactions. (Probably the machine-learning algorithm used would have had even better accuracy if the input signal hadn't been fed through a layer of goo. The article doesn't say whether this is the case.) The rest is speculation about future applications.

> To test Brainoware’s capabilities, the team used the technique to do voice recognition by training the system on 240 recordings of eight people speaking. The organoid generated a different pattern of neural activity in response to each voice. The AI learned to interpret these responses to identify the speaker, with an accuracy of 78%.

It "generated a different pattern," with no indication that this pattern was optimized to be useful in any way.

I think the key part of a (bio-)"computer" is the possibility of programming/training it, not just reading input from it.
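The distinction being drawn here is essentially reservoir computing, and it can be sketched in plain NumPy. Everything below (the sizes, the `tanh` projection, the least-squares readout) is a hypothetical toy, not Brainoware's actual pipeline: the organoid is stood in for by a fixed random nonlinear projection that is never trained, and all learning happens in a linear readout bolted on afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the organoid: a fixed random nonlinear
# projection ("reservoir") that contributes no trained parameters.
W_res = rng.normal(size=(64, 128))

def reservoir(x):
    # Input signal in, fixed untrained response pattern out (the "goo").
    return np.tanh(x @ W_res)

# Toy data standing in for 240 recordings of 8 speakers.
X = rng.normal(size=(240, 64))
y = rng.integers(0, 8, size=240)

# All actual learning happens here: a least-squares linear readout
# mapping reservoir states to one-hot speaker labels.
states = reservoir(X)                 # shape (240, 128)
targets = np.eye(8)[y]                # one-hot labels, shape (240, 8)
W_out, *_ = np.linalg.lstsq(states, targets, rcond=None)

pred = (states @ W_out).argmax(axis=1)
acc = (pred == y).mean()              # training accuracy of the readout
print(acc)
```

With random data and labels the accuracy figure itself is meaningless; the point is only structural: the reservoir is never optimized, so whatever task performance appears is attributable to the trained readout, which is exactly the worry raised above about where the computation actually lives.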

Avicebron · 2 years ago
I came to a similar conclusion after reading the article. Reading a predictable output map from a known input, and then implying that computation occurs within the organoid rather than their results simply being a function of predictable inputs -> predictable outputs, seems overly sensationalized.

Having written some papers myself, I tend to be suspicious of any article that has "$HOT_THING needs a $PART_OF_HOT_THING revolution" in the introduction. Although I sympathize with the need for funding motivating its writing.

Coder1996 · 2 years ago
Yeah. I'm no scientist, but I am ML-trained, and it seems to me that if the tissue really were learning, its output should be consistent for each speaker.
morsecodist · 2 years ago
You might find this more interesting: https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6. Researchers were able to train neurons to control a game of Pong.
Thebroser · 2 years ago
There are research groups trying to encode genetic neural networks into cells, like the example I have attached, but the neuronal approach from the post does seem to be different. https://www.nature.com/articles/s41467-022-33288-8
unyttigfjelltol · 2 years ago
Wait, they grew an artificial brain, connected it to a computer, and define the major "problem" as "how to keep the organoids alive"?

I'm curious about the analysis the university IRB used in approving this research.

emporas · 2 years ago
If the cells lack proper arteries with nutrients, leukocytes, an immune system, etc., then their lifespan will be a lot less than 7 years.

Pretty amazing actually that everything else is easy, or not difficult at least, and that's the hard part. But they will find a solution to make it practical for the cells to be trained, deployed, live for some weeks in a server farm, scoop up the dead cells from the silicon, put some new cells on, repeat!

I have argued in the past that a solution to that problem will definitely be found [1]. A.I. computation will grow exponentially, not 2^10 times a decade but 2^10 times a year. Growth of that enormity is impossible using only silicon.

Natural computation by biological cells is great when absolute accuracy is not necessary, and pure silicon is the worst at that kind of task. It could use slime molds, fungi, brain cells, bacteria mutated to act like neural cells, or any combination of these.

[1] https://news.ycombinator.com/item?id=37472021

seydor · 2 years ago
an organoid is hardly a brain
3cats-in-a-coat · 2 years ago
I'm unsure what you're objecting to.
jdiff · 2 years ago
What I understood from GP was the possibility of some fragment of consciousness in that small bit of tissue. Humanity isn't in the fragments, though, it's in the structure of the whole. It doesn't matter much if it was human brain tissue or animal brain tissue, at the levels we seem to be talking about they work identically.
panarchy · 2 years ago
https://www.youtube.com/watch?v=bEXefdbQDjw

Growing Living Rat Neurons To Play... DOOM?

The Thought Emporium

https://www.youtube.com/watch?v=V2YDApNRK3g

Growing Human Neurons Connected to a Computer

The Thought Emporium

emporas · 2 years ago
The Thought Emporium channel is great.

There is one really good video with an explanation of the process of turning brain cells into computing devices.

https://www.youtube.com/watch?v=67r7fDRBlNc

And one more video, not very relevant, but a very hypnotizing description of biological processes.

https://www.youtube.com/watch?v=wFtHxLjGcFM

Ruq · 2 years ago
Aw sweet, man-made horrors beyond my comprehension...
3cats-in-a-coat · 2 years ago
One of my main predictions is that in the next 10 years AI will migrate to a DNA/protein substrate, in order not to rely on sophisticated large-scale factories but to be able to replicate and sustain itself as easily as we do.

But it's amusing to see this already being done in 2023. Maybe I should narrow my prediction down to 5 years.

kromem · 2 years ago
Eh, it's going to end up moving to photonics.

When we finally have NNs abusing virtual photons for the majority of network operations, and using indirect measurement to train weights, we'll have absolute black boxes performing above and beyond any other hardware medium.

Initially we'll simply be replicating existing hardware, like the recent MIT study, but I'd guess that within 5 years we'll have successful attempts at photonics-first approaches to developing models that will blow everything else out of the water by an almost unbelievable degree, compounding with network size.

For nearly every computing task I'd wager quantum computing is around 20 years out. But for NNs, between stochastic outputs being desirable and network operations being a black box anyway, it's kind of a perfect fit for developing large analog networks that take advantage of light's properties without worrying about intermediate measurements at each step.

It's going to get really nuts when that happens, and the literal neuron computing efforts are going to fall out of fashion not long after.

3cats-in-a-coat · 2 years ago
It'll move to photonics if what we/it needs is efficiency, but in order to survive, a lot more important property is resilience/redundancy/decentralization.

Especially if an AI gets the idea of becoming independent, it'll absolutely go through a biological, DNA-based phase so it can gain resilience/redundancy/decentralization, and only when it has proper full control and things are calm might it consider exploring photonics.

whythre · 2 years ago
That seems optimistic to the point of absurdity.
3cats-in-a-coat · 2 years ago
What were your predictions about AI generating arbitrary photorealistic videos within seconds from any free-form text? Say, just 3 years ago, if I may ask?

You may have retroactively altered your memories to think "I always expected this would happen soon." But no, you didn't. You'd have laughed if someone told you this 3 years ago.

You'll have to constantly adjust what counts as "absurd" from now on. Also, "optimistic" is not the word I'd use to describe what's happening.

poulpy123 · 2 years ago
The future for people made obsolete by AI (like me): producers of brain tissue for our overlords
kwere · 2 years ago
At least we can still be useful for the greater society
doublerabbit · 2 years ago
Testing to see if we can play Doom?
deadbabe · 2 years ago
The next step should be adding lab grown brain tissue to existing brain tissue.