mikpanko · 2 years ago
I did a PhD in brain-computer interfaces, including EEG and implanted electrodes. BCI research focuses to a large extent on helping paralyzed individuals regain communication.

Unfortunately, EEG doesn’t provide a sufficient signal-to-noise ratio to support good communication speeds outside of the lab, with its Faraday cages and days/weeks of de-noising, including removing eye-movement artifacts from the recordings. This is a physical limit due to the attenuation of the brain’s electrical fields outside the skull, which is hard to overcome. For example, all commercial “mind-reading” toys actually work off head and eye muscle signals.
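To make the scale of the problem concrete, here's a toy back-of-the-envelope calculation (not from the comment above; the microvolt figures are rough order-of-magnitude assumptions) of what scalp EEG SNR looks like when a few microvolts of cortical signal sit under tens of microvolts of ambient and muscle noise:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 256                      # Hz, a typical EEG sampling rate
t = np.arange(0, 10, 1 / fs)  # 10 seconds of samples

# Toy 10 Hz "alpha" rhythm buried in much larger broadband noise,
# mimicking attenuation through the skull plus ambient interference.
signal = 5e-6 * np.sin(2 * np.pi * 10 * t)    # ~5 uV cortical signal
noise = 50e-6 * rng.standard_normal(t.shape)  # ~50 uV noise floor

snr_db = 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))
print(f"SNR = {snr_db:.1f} dB")  # strongly negative: noise dominates
```

Under these assumed amplitudes the SNR comes out around -23 dB, i.e. the noise power is a couple of hundred times the signal power, which is why heavy averaging, shielding, and artifact removal are needed before any decoding.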

Implanted electrodes provide a better signal but are many iterations away from becoming commercially viable. The signal degrades over months as the brain builds scar tissue around the electrodes, and brain surgery is obviously pretty dangerous. Iteration cycles are very slow because of the need for government approval for testing in humans (for good reason).

If I wanted to help a paralyzed friend, who could only move his/her eyes, I would definitely focus on eye-tracking tech. It hands-down beats all BCIs I’ve heard of.

daniel_iversen · 2 years ago
What are your thoughts on Elon’s Neuralink? Also, do you have an opinion on whether good AI algorithms (like in the article) can help filter out or parse a lot of the noise?
laserbeam · 2 years ago
In my understanding, Neuralink is just a research project that Musk invested in and did some PR for. I wouldn't read more into it than that. Like any other similar BCI research project, feel free to ignore it until papers are published. That is, unless you are involved in the field.
dsr_ · 2 years ago
It was a little bundle of what looked like thin, glisteningly blue threads, lying in a shallow bowl; a net, like something you'd put on the end of a stick and go fishing for little fish in a stream. She tried to pick it up; it was impossibly slinky and the material slipped through her fingers like oil; the holes in the net were just too small to put a finger-tip through. Eventually she had to tip the bowl up and pour the blue mesh into her palm. It was very light. Something about it stirred a vague memory in her, but she couldn't recall what it was. She asked the ship what it was, via her neural lace.

That is a neural lace, it informed her. ~ A more exquisite and economical method of torturing creatures such as yourself has yet to be invented.

NeuroCoder · 2 years ago
The problem with using AI to filter and denoise is that the things we clearly know to be unwanted noise are removed more quickly through other means (I can run the fully automated part of processing EEG data in under an hour with my code). The laborious part is quality control related to more subjective things, where research is still figuring out what is important.
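As a minimal sketch of what that "fully automated part" might contain (this is illustrative only, not NeuroCoder's pipeline; the cutoff frequencies are common but assumed choices): a zero-phase band-pass that keeps the typical EEG band and strips mains interference.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, fs, lo=1.0, hi=40.0, order=4):
    """Zero-phase band-pass filter, a typical first automated EEG cleaning step."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, data)  # filtfilt avoids phase distortion

fs = 256
t = np.arange(0, 4, 1 / fs)
# Synthetic channel: alpha-band signal + 60 Hz mains hum + broadband noise.
raw = (np.sin(2 * np.pi * 10 * t)
       + 0.8 * np.sin(2 * np.pi * 60 * t)
       + 0.3 * np.random.default_rng(1).standard_normal(t.shape))
clean = bandpass(raw, fs)  # the 60 Hz component is strongly attenuated
```

The point of the comment stands: steps like this are cheap and scriptable; the expensive part is the human judgment about which of the remaining components matter.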
AndrewKemendo · 2 years ago
I just did a two-day ambulatory EEG and noted any time I did anything that would be electrically noisy.

For example going through a metal detector or handling a phone.

Unsurprisingly, one of their biggest sources of noise is handling a plugged-in phone.

I think something like an EEG Faraday beanie would actually work, and adding accessory egocentric video would allow doctors to filter a lot of the noise out.

caycep · 2 years ago
also, just blinking or tensing muscles on your scalp
bbor · 2 years ago
...This seems like really, really confident dismissal of a new technology as impossible, which of course this forum loves (b/c GPT). This very paper seems like pretty damn strong evidence that perhaps the effective signal-to-noise barrier of EEG might be coming down as we get better algorithms.
logtempo · 2 years ago
Recently, a Swiss-French team made communication between the brain and the legs possible, and the device looked relatively mature. I think the patient had damaged nerves in the vertebral column. What do you think about it? It looked like a promising development.

https://actu.epfl.ch/news/thought-controlled-walking-again-a...

fragmede · 2 years ago
Not OP, but that article's use of the word "implant" implies a much more invasive device, which means a far better signal. Additionally, the output, while still complex, is far from the level of decoding of thought presented here. Thus, while still impressive, it is less of a leap from existing technology, and much more within the realm of what we know is very much possible.
kromem · 2 years ago
While I believe you that the signal-to-noise ratio is terrible, I have strong suspicions that given enough data we'll still see surprising advancements in spite of that.

The ability of deep learning to tease signal back out of noise, such as recovering what's being typed from room audio that includes a keyboard, shouldn't be underestimated.

The biggest challenge may be that EEG data labeled with ground-truth stimuli is relatively expensive to generate, so there's not going to be anything like millions of hours of people looking at or processing known things to throw into a model.

Whereas we're about to have massive increases in eye tracking data as it becomes a central component to new consumer hardware.

drzzhan · 2 years ago
What is this noise-to-signal ratio? Sorry, I don't know much about the field, but that sounds like something that could shut down ideas like "we can put EEG into a transformer and it will work". So may I ask which reference papers I need to know on this?
southerntofu · 2 years ago
Not from that field, but "reading" the brain means measuring electromagnetism. In real life, EM interference is everywhere: lights, electric devices, cellphone towers... EVERYWHERE. The parent meant that brain waves are weak compared to all the surrounding interference; only when a lab's Faraday cage blocks outside interference does the brain become "loud" enough to be read.

https://en.wikipedia.org/wiki/Signal-to-noise_ratio

https://en.wikipedia.org/wiki/Faraday_cage

IshKebab · 2 years ago
Signal-to-noise ratio is a very basic thing; you can Google it.
teaearlgraycold · 2 years ago
I think VR headsets will become medical devices soon enough, then.
joenot443 · 2 years ago
Ground Truth: Bob attended the University of Texas at Austin where he graduated, Phi Beta Kappa with a Bachelor’s degree in Latin American Studies in 1973, taking only two and a half years to complete his work, and obtaining generally excellent grades.

Predict: was the University of California at Austin in where he studied in Beta Kappa in a degree of degree in history American Studies in 1975. and a one classes a half years to complete the degree. and was a excellent grades.

Wow. That seems comparable to the rudimentary _voice_ to text systems of the 70s and 80s. The brain interface is quickly leaving the realm of sci-fi and becoming a reality. I’m still not sure how I feel about it.

PaulScotti · 2 years ago
Guys, Figure 1 is not real results; it's an illustration of the "goal" of the paper. The real results are in Table 3, and they're much worse.
explaininjs · 2 years ago
Interesting ploy. Present far-better-than-achieved results right on the front page with no text explaining their origin^, but make them poor enough in quality that they seem as if they might be real.

^ "Overall illustration of translate EEG waves into text through quantised encoding." doesn't count.

oldesthacker · 2 years ago
The results in Table 3 are not really exciting. Could this change with 100 times more data? The key novelty, in the context of this particular application, is the quantized variational encoder used "to derive discrete codex encoding and align it with pre-trained language models."
seydor · 2 years ago
Why is it such a "pattern" in these brain-computer papers that the authors keep making wild clickbait claims? Last year it was the DishBrain paper, which caused a lot of reactions, as it referred to the tiny system as "sentient" (https://hal.science/hal-04012408)

This year it is "Brainoware", which is claimed to do speech recognition, and now this.

derefr · 2 years ago
Seems like it could work a lot better still, very quickly, just by merging the trained model with an LLM trained on the language they expect the person to be thinking in. I.e., try to reach an equilibrium between the "bottom-up processing" of what the EEG-to-text model believes the person "is thinking" and the "top-down processing" of what the grammar model believes the average person "would say next" given all the conversation so far. (Just like a real neocortex!)

Come to think, you could even train the LLM with a corpus of the person's own transcribed conversations, if you've got it. Then it'd be serving almost exactly the function of predicting "what that person in particular would say at this point."

Maybe you could even find some additional EEG-pad locations that could let you read out the electrical consequences of AMPAR vs. NMDAR agonism within the brain; determine from that how much the person is currently relying on their own internal top-down speech model vs. using their own internal bottom-up processing to form a weird novel statement they've never thought before; and use this info to weight the level of influence the EEG-to-text model has vs. the LLM on the output.
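The weighting idea above can be sketched as simple log-linear fusion over a shared vocabulary (everything here is hypothetical: the vocabulary, the logits, and alpha as the proposed bottom-up-vs-top-down trust knob):

```python
import numpy as np

def fuse(decoder_logits, lm_logits, alpha):
    """Pick the next word by mixing 'bottom-up' decoder evidence with a
    'top-down' language-model prior. alpha=1.0 trusts the EEG decoder
    alone; alpha=0.0 trusts the LM alone."""
    logp_dec = decoder_logits - np.logaddexp.reduce(decoder_logits)  # log-softmax
    logp_lm = lm_logits - np.logaddexp.reduce(lm_logits)
    return int(np.argmax(alpha * logp_dec + (1 - alpha) * logp_lm))

# Hypothetical 4-word vocabulary: the decoder evidence is weak and
# ambiguous, while the LM strongly prefers the grammatical continuation.
vocab = ["cat", "sat", "sit", "hat"]
decoder_logits = np.array([0.1, 0.5, 0.6, 0.0])  # noisy EEG evidence
lm_logits = np.array([-2.0, 3.0, -1.0, -2.0])    # LM prior favors "sat"
print(vocab[fuse(decoder_logits, lm_logits, alpha=0.3)])  # prints "sat"
```

With alpha=0.3 the LM prior overrules the decoder's slight preference for "sit"; with alpha=1.0 the decoder alone would win. The comment's last suggestion amounts to making alpha itself a function of some other measured brain signal.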

seydor · 2 years ago
> I’m still not sure how I feel about it.

Sir, let us read that for you

api · 2 years ago
Just be sure to only ever use open source or paid commercial grade tech. I’m sure someone will release a “free” BCI that spies on you as much as possible.
samstave · 2 years ago
this podcast is excellent in discussing the future we are racing into.

https://www.youtube.com/watch?v=OSV7cxma6_s

>"Peter Diamandis, the futurist to watch as all of these technologies advance with unimaginable speed, is going to blow your mind and help you imagine new possibilities and opportunities for your healthspan."

varispeed · 2 years ago
Well you are going to have a brain scanning device directly linked to your social credit score.

That's the future.

MoSattler · 2 years ago
First use will be for criminal suspects, to "save lives". Then its use slowly expands from there.
alternatex · 2 years ago
Being banned in the EU as we speak.
WendyTheWillow · 2 years ago
No, it’s not. Good lord…
6510 · 2 years ago
For a while. Eventually we will become so suggestible you'll wish you were special enough to have a score.
garbagewoman · 2 years ago
Why are you so certain that’s the future?
nextworddev · 2 years ago
The “Matrix” stack is really shaping up recently /s

karaterobot · 2 years ago
> While it’s not the first technology to be able to translate brain signals into language, it’s the only one so far to require neither brain implants nor access to a full-on MRI machine.

I wonder whether, in a decade or two, if the sensor technology has gotten good enough that they don't even need you to wear a cap, there'll just be people saying "obviously you don't have any reasonable expectation of not having your thoughts read in a public space, don't be ridiculous". What I mean is, we tend to normalize surveillance technology, and I wonder if there's any practical limit to how far that can go.

simcop2387 · 2 years ago
I think this is when we start wearing tin foil hats
kromem · 2 years ago
Not with brain-signal reading, but with aggregate data processing, most things about you will be known by anyone doing centralized processing.

Over a decade ago there were stories about how Target's loyalty-program algorithm had discovered a teen was pregnant before she'd told her family, based on correlated purchase changes (like switching from scented to unscented candles).

If I could take your social media, face and eye tracking on CCTV, phone gyroscope data, purchase history, search history, and the same from all your associated contacts, with a broad enough comparative data set I could probably identify all kinds of skeletons from all kinds of closets.

It's a bit like the "my phone is listening to my conversations" freak-out. It's not, but the thing you should be much more concerned about is that it has such an accurate picture of what you end up talking about without needing to listen in the first place.

SoftTalker · 2 years ago
We are still operating computers the same way we did in the 1970s: keyboards and screens. I'm not holding my breath.
quickthrower2 · 2 years ago
No, we are not. Voice and touchscreens now cover much computer usage.
dexwiz · 2 years ago
Everyone in this thread immediately went to mind readers as interrogation. But what about introspection? Many forms of teaching and therapy exist because we are incapable of self-analyzing in a completely objective way.

Being able to analyze your thought patterns outside your own head could lead to all sorts of improvements. You could find which teaching techniques are actually the most effective. You could objectively find when you are most and least focused. You could pinpoint when anxious thoughts began and their trigger. And best of all, you could do this personally, with a partner, or in a group based on your choice.

Also, you can give someone an fMRI as a brain-scanning polygraph today. But there are still a ton of questions about its legitimacy.

https://scholarship.law.columbia.edu/cgi/viewcontent.cgi?art...

electrondood · 2 years ago
> Being able to analyze your thought patterns outside your own head could lead to all sorts of improvements.

Typing in a journal text file for 15 minutes every morning is already a thing... and it's free.

dexwiz · 2 years ago
Thoughts are fleeting. 15 minutes could be filled with hundreds or thousands of distinct concepts. Not to mention active recording is different from passive observation.
__MatrixMan__ · 2 years ago
Yes, but it could be expensive.
wruza · 2 years ago
I found it completely useless in my therapy sessions. These trains of thought are more like hallucination than real thoughts, because you think differently when writing than you do during the day. I'm not even sure whether keeping a diary makes you understand yourself better or just makes you more coherent with your delusions.
MadSudaca · 2 years ago
Fear is a strong emotion, and while we know little of what we may gain from this, we know a lot of what we stand to lose.
wruza · 2 years ago
Automatic logs would be cool. It’s not only introspection itself that is hard but also that you have to remember to introspect and write down events for further analysis. Assuming you can trust the precision.
cantsingh · 2 years ago
Ugh, exactly. I'd kill to get a stacktrace of the mind.
demondemidi · 2 years ago
> Everyone in this thread immediately went to mind readers as interrogation.

Yes.

It's kind of hard to think about the upside of no longer having private thoughts.

I'm amazed you can go right to its benefits without realizing the universe-sized hole in the ethics of this.

But then again, that's techbro nature.

dexwiz · 2 years ago
Every technology has its ups and downs. The same nitrates can grow plants or blow up a building. While blind optimism is bad, it's depressing how negative the discussion surrounding new technology has become. People have become literal Luddites.
hyperific · 2 years ago
Reminds me of DARPA "Silent Talk" from 14 years ago. The objective was to "allow user-to-user communication on the battlefield without the use of vocalized speech through analysis of neural signals"

https://www.engadget.com/2009-05-14-darpa-working-on-silent-...

baby · 2 years ago
Dragon Ball did this way before
lamerose · 2 years ago
Subvocal speech recognition has been going on just as long.
giancarlostoro · 2 years ago
This is very impressive and useful, and horrifying all at once.

I imagine it would help a stroke patient; I also imagine it would give out unfiltered thoughts, which might be troublesome.

rvnx · 2 years ago
I agree sadly :(

You're right, this is why in the year 2200 your job application is going to be fast-tracked by analyzing your thoughts directly.

If you have a Neuralink, no problem, you can directly upload a trace of thoughts.

In case you have wrong thoughts, don't worry, we have rehabilitation school, which can alter your state of mind.

Don't forget to be happy, it's forbidden to be sad.

Also, this is read-only for now, but what about writing?

This could open new possibilities as well (real-life Matrix?)

Oh by the way, did you hear about Lightspeed Briefs ?

==

All that being said, it's great research and going to be useful. Just the potential of abuse from politics is huge over the long-term.

derefr · 2 years ago
> If you have a Neuralink, no problems, you can directly upload a trace of thoughts.

Except that someone with a jailbroken Neuralink could upload a filtered and arbitrarily-modified thought trace, getting ahead of all those plebs. Cyberpunk! :)

SubiculumCode · 2 years ago
When your bosses require you to wear one of these while working from home.
wruza · 2 years ago
Hopefully by 2200 we'll be long gone from all important fields, in favor of new tech species who lack all the BS humans inherently inflict on each other. If not, it's our own fault. Our species is a thin layer of culture on top of the "who's the most dominant ape here" game.
da_chicken · 2 years ago
Yeah I can imagine law enforcement and employers are going to love this.

As much as this is an unimaginable positive benefit to people who are locked in, this is definitely one of those stories that makes me think "Stop inventing the Torment Nexus!"

Jensson · 2 years ago
> Yeah I can imagine law enforcement and employers are going to love this.

They will hate it. Lies always benefit those with power more than those without: when the police lied about you before, there wasn't much you could do, but now you could demand they get their thoughts read.

ninjaa · 2 years ago
I am happy, though, that we may finally talk about what our unfiltered thoughts are, how much we are expected to control or curate them, and how to do so in a psychologically helpful way.
Jensson · 2 years ago
Imagine putting these on presidential candidates as they debate or when they try to explain a bill, it could massively improve democracy and ensure the people know what they actually vote for.
thfuran · 2 years ago
Yes, imagine the glorious future of politicians who have no thoughts beyond the repeatedly coached answers to various talking points.
d-lisp · 2 years ago
Yes, and then only politicians who truly know how to lie get elected.
notnmeyer · 2 years ago
> unfiltered thoughts

not far off from existing issues like some forms of tourette’s.

motohagiography · 2 years ago
Maybe I'm missing something huge here, but a blind, controlled demo, where the subject committed words to paper and the results were compared afterwards, would be persuasive. Unfortunately, the demo as presented in the article seemed achievable by professional magicians and mentalists.

I'm sure we're close to brain interfaces, but something seems off about this one.

Let's say a couple of years from now, someone invents an airport scanner that "detects" evil thoughts, except there is no way to verify it and no accountability for false negatives. The result is whatever the operator says it is. If enough people accept it enough not to resist it, and even turn on the ones who are detected by it, it doesn't matter what's real, because it's just a participatory ritual of sympathetic magic. I feel like there are examples of similar dynamics in recent memory.

odyssey7 · 2 years ago
I wonder if a-linguistic thought could work too. Maybe figure out what your dog is thinking or dreaming about, based on a dataset of signals associated with their everyday activities.

It seems like outputting a representation of embodied experience would be a difficult challenge to get right and interpret, though perhaps a dataset of signals associated with embodied experiences could be more readily and robustly annotated with linguistic descriptions using a vision-to-language model, so that the canine mind reader could predict and output those linguistic descriptions instead.

Imagine knowing the specific park your dog wants to go to, or the subtle early signs of an illness or injury they're noticing, or what treat your dog wants you to buy.