I did a PhD in brain-computer interfaces, including EEG and implanted electrodes. BCI research focuses to a large extent on helping paralyzed individuals regain communication.
Unfortunately, EEG doesn’t provide a sufficient signal-to-noise ratio to support good communication speeds outside of a lab with Faraday cages and days/weeks of de-noising, including removing eye-movement artifacts from the recordings. This is a physical limit due to the attenuation of the brain’s electrical fields outside the skull, which is hard to overcome. For example, all commercial “mind-reading” toys actually work off head and eye muscle signals.
Implanted electrodes provide better signal but are many iterations away from becoming viable commercially. Signal degrades over months as the brain builds scar tissue around electrodes and the brain surgery is obviously pretty dangerous. Iteration cycles are very slow because of the need for government approval for testing in humans (for a good reason).
If I wanted to help a paralyzed friend who could only move his/her eyes, I would definitely focus on eye-tracking tech. It hands-down beats all BCIs I’ve heard of.
What are your thoughts on Elon’s NeuraLink? Also, do you have an opinion on whether good AI algorithms (like in the article) can help filter out or parse a lot of the noise?
In my understanding, NeuraLink is just a research project that Musk invested in and did some PR for. I wouldn't read into it more than that. Like any other similar BCI research project, feel free to ignore it until papers are published. That is, unless you are involved in the field.
It was a little bundle of what looked like thin, glisteningly blue threads, lying in a shallow bowl; a net, like something you'd put on the end of a stick and go fishing for little fish in a stream. She tried to pick it up; it was impossibly slinky and the material slipped through her fingers like oil; the holes in the net were just too small to put a finger-tip through. Eventually she had to tip the bowl up and pour the blue mesh into her palm. It was very light. Something about it stirred a vague memory in her, but she couldn't recall what it was. She asked the ship what it was, via her neural lace.
That is a neural lace, it informed her. ~ A more exquisite and economical method of torturing creatures such as yourself has yet to be invented.
The problem with using AI to filter and denoise is that the things we clearly know are unwanted noise are more quickly removed through other means (I can run the fully automated part of processing EEG data in under an hour with my code). The laborious part is quality control around the more subjective things, where research is still figuring out what is important.
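Roughly, the automated part looks like the sketch below, using MNE-Python; the file name, filter settings, and the assumption that the montage includes an EOG channel are illustrative, not an exact pipeline:

    import mne

    # Load a recording (hypothetical file) and strip the clearly unwanted noise
    raw = mne.io.read_raw_edf("session01.edf", preload=True)
    raw.filter(l_freq=1.0, h_freq=40.0)   # band-pass away slow drift and high-frequency noise
    raw.notch_filter(freqs=[50])          # mains interference (60 Hz in the US)

    # ICA-based removal of eye-movement artifacts, assuming an EOG channel exists
    ica = mne.preprocessing.ICA(n_components=20, random_state=0)
    ica.fit(raw)
    eog_inds, _ = ica.find_bads_eog(raw)  # components correlated with eye movement
    ica.exclude = eog_inds
    ica.apply(raw)                        # project those components out of the data

The slow part described above, deciding which of the remaining components and segments actually matter, is exactly what a script like this can't do for you.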
I just did a two-day ambulatory EEG and noted any time I did anything that would be electrically noisy.
For example, going through a metal detector or handling a phone.
Unsurprisingly, one of their biggest sources of noise is handling a plugged-in phone.
I think something like an EEG Faraday beanie would actually work, and adding accessory egocentric video would allow doctors to filter a lot of the noise out.
...This seems like a really, really confident dismissal of a new technology as impossible, which of course this forum loves (b/c GPT). This very paper seems like pretty damn strong evidence that perhaps the signal-to-noise problem of EEG might be coming down as we get better algorithms.
Recently, a Swiss-French team made communication between the brain and the legs possible, and the device looked relatively mature. I think the patient had damaged nerves in the spinal column. What do you think about it? It looked like a promising development.
Not OP, but that article's use of the word "implant" implies a much more invasive device, which means a way better signal. Additionally, the output, while still complex, is far from the level of decoding of thought presented here. Thus, while still impressive, it is less of a leap from existing technology, and much more within the realm of what we know is very much possible.
While I believe you that the signal-to-noise ratio is terrible, I have strong suspicions that, given enough data, we'll still be seeing surprising advancements in spite of that.
The ability of deep learning to tease signal back out of noise, such as recovering what's being typed from room audio with a keyboard in it, shouldn't be underestimated.
The biggest challenge may be that EEG data correlated with known stimuli is relatively expensive to generate, so there's not going to be anything like millions of hours of people looking at or processing known things to throw into a model.
Whereas we're about to have massive increases in eye-tracking data as it becomes a central component of new consumer hardware.
What is this signal-to-noise ratio? Sorry, I don't know much about the field, but that sounds like something that can shut down ideas like "we can put EEG into a transformer and it will work". So may I ask what reference papers I need to know on this?
Not from that field, but "reading" the brain means electromagnetism. In real life, EM interference is everywhere: from lights, electric devices, cellphone towers... EVERYWHERE. The parent meant that brain waves are too weak to detect compared to all the surrounding interference, except when a lab Faraday cage blocks the outside interference; then the brain becomes "loud" enough to be read.
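A toy numerical sketch of what that means; the amplitudes are made up but in a plausible ballpark (scalp EEG is on the order of tens of microvolts, while mains and ambient pickup can easily be larger):

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(0, 10, 1 / 250)                   # 10 s sampled at 250 Hz
    brain = 10e-6 * np.sin(2 * np.pi * 10 * t)      # ~10 uV alpha-band "signal"
    mains = 50e-6 * np.sin(2 * np.pi * 50 * t)      # 50 Hz mains interference
    ambient = 20e-6 * rng.standard_normal(t.size)   # broadband environmental noise

    def snr_db(signal, noise):
        """Signal-to-noise ratio in decibels: signal power over noise power."""
        return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

    print(snr_db(brain, mains + ambient))  # negative dB: the noise carries more power

A Faraday cage removes the mains and ambient terms, which is why the same brain signal suddenly becomes readable inside one.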
Ground Truth: Bob attended the University of Texas at Austin where he graduated, Phi Beta Kappa with a Bachelor’s degree in Latin American Studies in 1973, taking only two and a half years to complete his work, and obtaining generally excellent grades.
Predict: was the University of California at Austin in where he studied in Beta Kappa in a degree of degree in history American Studies in 1975. and a one classes a half years to complete the degree. and was a excellent grades.
Wow. That seems comparable to the rudimentary _voice_ to text systems of the 70s and 80s. The brain interface is quickly leaving the realm of sci-fi and becoming a reality. I’m still not sure how I feel about it.
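To put a rough number on the example above: word error rate is an outside choice of metric, not the paper's, and this only covers the opening fragment of the sentence, but it gives a feel for the comparison with old speech systems:

    def wer(ref, hyp):
        """Word error rate: word-level edit distance divided by reference length."""
        r, h = ref.split(), hyp.split()
        d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
        for i in range(len(r) + 1):
            d[i][0] = i
        for j in range(len(h) + 1):
            d[0][j] = j
        for i in range(1, len(r) + 1):
            for j in range(1, len(h) + 1):
                sub = 0 if r[i - 1] == h[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + sub)  # substitution or match
        return d[len(r)][len(h)] / len(r)

    ref = "Bob attended the University of Texas at Austin where he graduated"
    hyp = "was the University of California at Austin in where he studied in"
    print(round(wer(ref, hyp), 2))  # far above what modern ASR systems produce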
Interesting ploy. Present far-better-than-achieved results right on the front page with no text to explain their origin^, but make them poor enough quality to make it seem as if they might be real.
^ "Overall illustration of translate EEG waves into text through quantised encoding." doesn't count.
The results in Table 3 are not really exciting. Could this change with 100 times more data? The key novelty for this particular application is the quantized variational encoder used "to derive discrete codex encoding and align it with pre-trained language models."
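For anyone unfamiliar with the term, below is a minimal sketch of the nearest-codebook lookup at the heart of that quantisation step; the sizes are made up, and the paper's actual encoder, codebook training, and LM-alignment loss are not reproduced here:

    import numpy as np

    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(512, 64))     # 512 discrete codes, 64 dims each

    def quantise(z):
        """Map each continuous feature vector to the index of its nearest
        codebook entry, turning an EEG feature sequence into token-like ids."""
        dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        return dists.argmin(axis=1)

    eeg_features = rng.normal(size=(10, 64))  # 10 time steps of encoder output
    print(quantise(eeg_features))             # 10 discrete ids an LM could be aligned to

In a real VQ-VAE-style setup the codebook is of course learned jointly with the encoder rather than sampled at random.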
Why is it such a "pattern" in these brain-computer papers that the authors keep making wild clickbait claims? Last year it was the DishBrain paper, which caused a lot of reactions, as it referred to the tiny system as "sentient" (https://hal.science/hal-04012408).
This year it is the "Brainoware", which is claimed to do speech recognition, and now this.
Seems like it could work a lot better still, very quickly, just by merging the trained model with an LLM trained on the language they expect the person to be thinking in. I.e. try to get an equilibrium between the "bottom-up processing" of what the TTS model believes the person "is thinking", and the "top-down processing" of what the grammar model believes the average person "would say next" given all the conversation so far. (Just like a real neocortex!)
Come to think, you could even train the LLM with a corpus of the person's own transcribed conversations, if you've got it. Then it'd be serving almost exactly the function of predicting "what that person in particular would say at this point."
Maybe you could even find some additional EEG-pad locations that could let you read out the electrical consequences of AMPAR vs NMDAR agonism within the brain; determine from that how much the person is currently relying on their own internal top-down speech model vs using their own internal bottom-up processing to form a weird novel statement they've never thought before; and use this info to weight the level of influence the TTS model has vs the LLM on the output.
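What's described above is close to shallow fusion in speech recognition. Here is a minimal sketch of that weighting for a single decoding step; the vocabulary, probabilities, and mixing weight alpha are all made up, and alpha is exactly the knob the AMPAR/NMDAR idea would try to set dynamically:

    import numpy as np

    def fuse_step(decoder_logp, lm_logp, alpha=0.5):
        """Interpolate the 'bottom-up' brain-decoder log-probs with the
        'top-down' language-model log-probs for the next token.
        alpha=0 trusts only the decoder; alpha=1 trusts only the LM."""
        return (1 - alpha) * decoder_logp + alpha * lm_logp

    vocab = ["park", "bark", "dark"]
    decoder_logp = np.log([0.35, 0.33, 0.32])  # noisy, nearly uniform EEG decoder
    lm_logp = np.log([0.70, 0.05, 0.25])       # LM strongly expects "park" here

    fused = fuse_step(decoder_logp, lm_logp, alpha=0.5)
    print(vocab[int(np.argmax(fused))])        # -> "park"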
Just be sure to only ever use open source or paid commercial grade tech. I’m sure someone will release a “free” BCI that spies on you as much as possible.
>"Peter Diamandis, the futurist to watch as all of these technologies advance with unimaginable speed, is going to blow your mind and help you imagine new possibilities and opportunities for your healthspan."
> While it’s not the first technology to be able to translate brain signals into language, it’s the only one so far to require neither brain implants nor access to a full-on MRI machine.
I wonder whether, in a decade or two, if the sensor technology has gotten good enough that they don't even need you to wear a cap, there'll just be people saying "obviously you don't have any reasonable expectation of not having your thoughts read in a public space, don't be ridiculous". What I mean is, we just tend to normalize surveillance technology, and I wonder if there's any practical limit to how far that can go.
Not with brain-signal reading, but with aggregate data processing, most things about you will be known by any centralized processor.
Over a decade ago there were stories about how Target's loyalty-program algorithm had discovered a teen was pregnant before she'd told her family, based on correlative purchase changes (like switching from scented to unscented candles).
If I could take your social media, face and eye tracking on CCTV, phone gyroscope data, purchase history, search history, and the same from all your associated contacts, with a broad enough comparative data set I could probably identify all kinds of skeletons from all kinds of closets.
It's a bit like the "my phone is listening to my conversations" freak out. It's not, but the thing you should be much more concerned about is that it has such an accurate picture of what you end up talking about without needing to be listening in the first place.
Everyone in this thread immediately went to mind readers as interrogation tools. But what about introspection? Many forms of teaching and therapy exist because we are incapable of self-analyzing in a completely objective way.
Being able to analyze your thought patterns outside your own head could lead to all sorts of improvements. You could find which teaching techniques are actually the most effective. You could objectively find when you are most and least focused. You could pinpoint when anxious thoughts began and their trigger. And best of all, you could do this personally, with a partner, or in a group based on your choice.
Also, you can give someone an fMRI as a brain-scanning polygraph today. But there are still a ton of questions about its legitimacy.
Thoughts are fleeting. 15 minutes could be filled with hundreds or thousands of distinct concepts. Not to mention active recording is different from passive observation.
I found it completely useless in my therapy sessions. These trains of thought are more like hallucination than real thoughts, because you think differently when writing than during the day. I’m not even sure if keeping a diary makes you understand yourself better or just makes you more coherent with your delusions.
Automatic logs would be cool. It’s not only introspection itself that is hard but also that you have to remember to introspect and write down events for further analysis. Assuming you can trust the precision.
Every technology has its ups and downs. The same nitrates can grow plants or blow up a building. While blind optimism is bad, it’s depressing how negative discussion surrounding new technology has become. People have become literal luddites.
Reminds me of DARPA "Silent Talk" from 14 years ago. The objective was to "allow user-to-user communication on the battlefield without the use of vocalized speech through analysis of neural signals"
> If you have a Neuralink, no problems, you can directly upload a trace of thoughts.
Except that someone with a jailbroken Neuralink could upload a filtered and arbitrarily-modified thought trace, getting ahead of all those plebs. Cyberpunk! :)
Hopefully by 2200 we’ll be long gone from all important fields, in favor of new tech species who lack all the BS humans inherently inflict on each other. If not, it's our own fault. Our species is a thin layer of culture on top of the “who’s the most dominant ape here” game.
Yeah I can imagine law enforcement and employers are going to love this.
As much as this is an unimaginable positive benefit to people who are locked in, this is definitely one of those stories that makes me think "Stop inventing the Torment Nexus!"
> Yeah I can imagine law enforcement and employers are going to love this.
They will hate it. Lies always benefit those with power more than those without: when the police lied about you, there wasn't much you could do before; now you could demand that they get their thoughts read.
I am happy, though, that we may finally talk about what our unfiltered thoughts are, how much we are expected to control or curate them, and how to do so in a psychologically helpful way.
Imagine putting these on presidential candidates as they debate or when they try to explain a bill, it could massively improve democracy and ensure the people know what they actually vote for.
Maybe I'm missing something huge here, but a blind, controlled demo where the subject committed words to paper and the results were compared afterwards would be persuasive. Unfortunately, the demo as presented in the article seemed achievable by professional magicians and mentalists.
I'm sure we're close to brain interfaces, but something seems off about this one.
Let's say a couple of years from now, someone invents an airport scanner that "detects" evil thoughts, except there is no way to verify it, and no accountability for false negatives. The result is whatever the operator says it is. If enough people accept it rather than resist it, and even turn on the ones who are detected by it, it doesn't matter what's real, because it's just a participatory ritual of sympathetic magic. I feel like there are examples of similar dynamics in recent memory.
I wonder if a-linguistic thought could work too. Maybe figure out what your dog is thinking or dreaming about, based on a dataset of signals associated with their everyday activities.
It seems like outputting a representation of embodied experience would be a difficult challenge to get right and interpret, though perhaps a dataset of signals associated with embodied experiences could more readily be robustly annotated with linguistic descriptions using a vision-to-language model, so that the canine mind reader could predict and output those linguistic descriptions instead.
Imagine knowing the specific park your dog wants to go to, or the subtle early signs of an illness or injury they're noticing, or what treat your dog wants you to buy.
https://actu.epfl.ch/news/thought-controlled-walking-again-a...
https://en.wikipedia.org/wiki/Signal-to-noise_ratio
https://en.wikipedia.org/wiki/Faraday_cage
Sir, let us read that for you
https://www.youtube.com/watch?v=OSV7cxma6_s
>"Peter Diamandis, the futurist to watch as all of these technologies advance with unimaginable speed, is going to blow your mind and help you imagine new possibilities and opportunities for your healthspan."
That's the future.
https://scholarship.law.columbia.edu/cgi/viewcontent.cgi?art...
Typing in a journal text file for 15 minutes every morning is already a thing... and it's free.
Yes.
It's kind of hard to think about the upside of no longer having private thoughts.
I'm amazed you can go right to its benefits without realizing the universe-sized hole in the ethics of this.
But then again, that's techbro nature.
https://www.engadget.com/2009-05-14-darpa-working-on-silent-...
I imagine it would help a stroke patient; I also imagine it would give out unfiltered thoughts, which might be troublesome.
You're right, this is why in the year 2200 your job application is going to be fast-tracked by analyzing your thoughts directly.
If you have a Neuralink, no problems, you can directly upload a trace of thoughts.
In case you have wrong thoughts, don't worry, we have rehabilitation school, which can alter your state of mind.
Don't forget to be happy, it's forbidden to be sad.
Also, this is read-only for now, but what about writing?
That could open new possibilities as well (a real-life Matrix?).
Oh by the way, did you hear about Lightspeed Briefs ?
==
All that being said, it's great research and is going to be useful. It's just that the potential for abuse by politics is huge over the long term.
Not far off from existing issues like some forms of Tourette's.