A recent study conducted by researchers at the Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia University advances brain-computer interface technology, revealing a possible solution for those with limited or no ability to speak.
The study builds on extensive research showing that speaking, or even imagining speech, produces a distinct neural response, as does the act or thought of listening. The logical next step was to decode these brainwave patterns and open the door to communicating directly via brain signals.
Dr. Nima Mesgarani, the study’s senior author and a principal investigator at the Zuckerman Institute, first approached the project by focusing on “simple computer models that analyzed spectrograms, which are visual representations of sound frequencies,” according to a statement about the study.
However, this approach fell short because it could not produce intelligible speech.
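As a rough illustration only, and not the study’s actual code, the spectrogram-based approach can be sketched in Python: a simple linear model is fit from hypothetical neural features to the magnitude spectrogram of the heard audio, and the predicted spectrogram is inverted back into a waveform. All data, shapes, and parameter values below are placeholders.

```python
# Minimal sketch of the first-pass, spectrogram-based approach (illustrative
# only; not the study's model). Neural features and audio are placeholders.
import numpy as np
import librosa
from sklearn.linear_model import Ridge

sr, n_fft, hop = 16000, 1024, 256            # assumed audio/spectrogram settings

# Hypothetical training data: neural features aligned to audio frames.
neural_train = np.random.randn(5000, 128)    # (frames, electrode features)
audio_train = np.random.randn(5000 * hop)    # stand-in for the heard speech

# Target: magnitude spectrogram of the heard speech, one row per frame.
spec_train = np.abs(librosa.stft(audio_train, n_fft=n_fft, hop_length=hop)).T
spec_train = spec_train[: len(neural_train)]

# "Simple computer model": a linear map from brain activity to spectrogram bins.
model = Ridge(alpha=1.0).fit(neural_train, spec_train)

# Reconstruct speech for new neural data by inverting the predicted spectrogram.
neural_test = np.random.randn(500, 128)
spec_pred = np.clip(model.predict(neural_test), 0, None).T
audio_pred = librosa.griffinlim(spec_pred, n_fft=n_fft, hop_length=hop)
```

With only a linear model and a generic spectrogram inverter, the reconstructed audio tends to sound robotic and hard to follow, which is consistent with why the researchers moved on to a different approach.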
Mesgarani and his team then turned to a vocoder, which is “a computer algorithm that can synthesize speech after being trained on recordings of people talking.” This is the same technology that Amazon Echo and Apple’s Siri use to give verbal responses.
Once the process was determined, the next step was to teach the vocoder to interpret brain patterns.
To do so, Mesgarani reached out to Dr. Ashesh Mehta, a neurosurgeon at the Northwell Health Physician Partners Neuroscience Institute, which specializes in treating epilepsy patients who undergo regular brain surgeries.
Epilepsy patients are valuable test subjects for advancing neuroscience precisely because of these regular brain surgeries.
Since the brain is already exposed during the surgery, scientists can perform many tests that, while neither harming nor helping the epileptic patient, can provide great benefit to the scientific community.
“Working with Dr. Mehta, we asked epilepsy patients already undergoing brain surgery to listen to sentences spoken by different people, while we measured patterns of brain activity,” Mesgarani said. “These neural patterns trained the vocoder.” Recordings from the same patients were then used to test whether the system could similarly decode the speech they had heard.
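A minimal sketch of that training step, under assumed data shapes, might look like the following in Python. The actual system paired a deep neural network with a sophisticated vocoder; here a small scikit-learn regressor and a generic set of vocoder parameters stand in for both, so every name and number is a placeholder rather than the study’s method.

```python
# Minimal sketch of training a network to map brain activity to vocoder
# parameters (illustrative only; the study's architecture is not shown here).
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical aligned data: one row of neural features per audio frame, paired
# with vocoder parameters extracted from the sentence being heard at that moment.
neural_activity = np.random.randn(20000, 128)   # (frames, electrode features)
vocoder_params = np.random.randn(20000, 32)     # (frames, vocoder parameters)

# "These neural patterns trained the vocoder": fit a regressor so that new
# brain recordings can be turned into vocoder parameters, and then into audio.
net = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200)
net.fit(neural_activity, vocoder_params)

# At test time, unseen neural activity is mapped to parameters that a vocoder
# (not shown) would render as an audible waveform.
new_activity = np.random.randn(1000, 128)
predicted_params = net.predict(new_activity)
```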
To test whether the vocoder had decoded the signals accurately, individuals were asked to report what they heard when the reconstructed recordings were played back.
“We found that people could understand and repeat the sounds about 75% of the time, which is well above and beyond any previous attempts,” Mesgarani continued. “The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy.”
The future of brain-to-speech translation seems to hinge on advancing this technology further, training it on a larger vocabulary to increase the vocoder’s accuracy and extending it to more complex words and sentences.
An all-too-common problem this new technology seeks to address is the loss of speech caused by medical conditions such as amyotrophic lateral sclerosis — also known as ALS — and the aftermath of a stroke. In patients with ALS, the skeletal muscles of the body begin to lose function due to the deterioration of the motor neurons that carry the brain’s commands to the body.
Only two specific muscle types don’t shut down in response to ALS: those in the eyes and those in the anus. The new technology could therefore circumvent the shutdown of the motor neurons controlling the mouth and tongue.
The late Stephen Hawking suffered from ALS but was able to communicate his complex ideas to millions with the use of an assistive technology that would track the movement of his cheek, one of the few muscles he was able to control in his later life.
This was an expensive technology, sponsored and provided by Intel specifically for him because of his fame and his contributions to the world.
The new method promises to be not only faster but also much more widely accessible. Whereas the cheek-tracking approach Hawking relied on introduced some delay, the method now under development reads from the brain directly, streamlining the translation process.
Additionally, not every one of the estimated 450,000 people with ALS worldwide is going to be sponsored by Intel for their contributions to the world.
This new method could help them as well, reintroducing what was once thought to be lost to the world: the voice of an ALS patient.