“In this scenario, if the wearer thinks ‘I need a glass of water’ (or ‘Can you help me with this?’), our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech,”
Nima Mesgarani, PhD, senior author of the new study and principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute
In a previous post, I tried to express the frustration we all feel when our child isn’t able to communicate with us.
When I wrote about artificial intelligence (AI) one day helping verbally disabled children and adults, I wasn’t expecting to read about advancements just a few weeks later!
The study, published January 29, 2019 in the journal Scientific Reports, was announced by Columbia’s Zuckerman Institute under a headline that shows we are moving forward: “Columbia Engineers Translate Brain Signals Directly into Speech; Advance marks critical step toward brain-computer interfaces that hold immense promise for those with limited or no ability to speak.”
From the article: “Decades of research has shown that when people speak — or even imagine speaking — telltale patterns of activity appear in their brain.”
Early efforts to decode brain signals by Dr. Mesgarani and others focused on simple computer models that analyzed spectrograms, which are visual representations of sound frequencies. Those models failed to produce anything resembling intelligible speech, so Dr. Mesgarani and his team turned instead to a vocoder, a computer algorithm that can synthesize speech after being trained on recordings of people talking. It is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions.
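For readers curious what a spectrogram actually looks like to a computer, here is a minimal, purely illustrative Python sketch (not the team’s actual analysis code) that computes one from a synthetic audio signal using SciPy:

```python
import numpy as np
from scipy import signal

# One second of synthetic audio at 16 kHz: a rising tone standing in
# for a real recording of someone speaking.
fs = 16000
t = np.linspace(0, 1, fs, endpoint=False)
audio = np.sin(2 * np.pi * (200 * t + 150 * t**2))  # sweeps from 200 Hz up to 500 Hz

# A spectrogram slices the audio into short overlapping windows and
# measures how much energy each sound frequency carries in each slice,
# producing a picture of sound over time.
freqs, times, spec = signal.spectrogram(audio, fs=fs, nperseg=512, noverlap=256)

print(spec.shape)  # (number of frequencies, number of time slices)
```

Each column of that grid describes one instant of sound; the early models tried, and failed, to reconstruct those columns directly from brain activity.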
The researchers turned to epilepsy patients who were already undergoing brain surgery. The patients listened to sentences spoken by different people while the team measured their patterns of brain activity, and those neural patterns were used to train the vocoder. Next, the researchers asked the same patients to listen to speakers reciting digits from 0 to 9 while recording the brain signals that could then be run through the vocoder. The resulting sound was analyzed and cleaned up by a type of artificial intelligence that mimics the structure of neurons in the biological brain.
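To make that train-then-decode pipeline a little more concrete, here is a hypothetical, greatly simplified Python sketch. All of the data below is fabricated, and a simple linear model stands in for the deep neural networks and vocoder the team actually used; it only shows the overall structure described above:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 1000 time slices of brain activity (128 electrodes)
# recorded while patients listened to speech, paired with the speech features
# (e.g., vocoder parameters) of what they heard at each slice.
brain_activity = rng.normal(size=(1000, 128))
true_mapping = rng.normal(size=(128, 32))
speech_features = brain_activity @ true_mapping + 0.1 * rng.normal(size=(1000, 32))

# Training phase: learn the brain-to-speech mapping from the paired data.
decoder = Ridge(alpha=1.0)
decoder.fit(brain_activity, speech_features)

# Test phase: new brain recordings (here, while hearing spoken digits)
# are run through the trained decoder to reconstruct speech features,
# which a vocoder would then turn back into audible sound.
new_brain_activity = rng.normal(size=(10, 128))
reconstructed = decoder.predict(new_brain_activity)
print(reconstructed.shape)  # 10 time slices of reconstructed speech features
```

In the real study, that last step is where the vocoder and the brain-mimicking network take over, turning the reconstructed features into the robotic voice described next.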
The end result was a robotic-sounding voice reciting a sequence of numbers that listeners could understand about 75% of the time, which is “well above and beyond any previous attempts” according to the lead researcher. Can you understand it? I can make out the numbers, but I’m not sure what the voice is saying before “one.”
And the most exciting part, and what all of us who care for a verbally disabled person want to hear: “Dr. Mesgarani and his team plan to test more complicated words and sentences next, and they want to run the same tests on brain signals emitted when a person speaks or imagines speaking. Ultimately, they hope their system could be part of an implant, similar to those worn by some epilepsy patients, that translates the wearer’s thoughts directly into words.”
“In this scenario, if the wearer thinks ‘I need a glass of water,’ our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech,” said Dr. Mesgarani. “This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”
References
- https://zuckermaninstitute.columbia.edu/columbia-engineers-translate-brain-signals-directly-speech
- https://www.nature.com/articles/s41598-018-37359-z
- https://pursuitofresearch.org/2019/01/04/artificial-intelligence-turns-brain-activity-into-speech
LISA GENG
Lisa Geng is an accomplished author, mother, founder, and president of the CHERAB Foundation. She is a patented inventor and creator in the fashion, toy, and film industries. After the early diagnosis of her two young children with severe apraxia, hypotonia, sensory processing disorder, ADHD, and CAPD, she dedicated her life to nonprofit work and pilot studies. Lisa is the co-author of the highly acclaimed book “The Late Talker” (St. Martin’s Press, 2003). She has hosted numerous conferences, including one overseen by a medical director from the NIH for her protocol using fish oils as a therapeutic intervention. Lisa currently holds four patents and patents pending on a nutritional composition, and she is a co-author of a study that used her proprietary nutritional composition, published in a National Institutes of Health-indexed, peer-reviewed medical journal.
Additionally, Lisa has been serving as an AAN Immunization Panel parent advocate since 2015 and is a member of CUE through Cochrane US. Currently working on her second book, “The Late Talker Grows Up,” she also serves as an executive producer of “Late Talkers Silent Voices.” Lisa Geng lives on the Treasure Coast of Florida.