“In this scenario, if the wearer thinks ‘I need a glass of water,’ (or ‘can you help me with this?’) our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech,”
— Nima Mesgarani, PhD, senior author of the new study and principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute

In a previous post, I shared the following to try to express the frustration we all feel when our child isn’t able to communicate with us.
When I wrote about artificial intelligence (AI) one day helping verbally disabled children and adults, I didn’t think I would be reading about advancements just a few weeks later!
The announcement from Columbia’s Zuckerman Institute, tied to a study published January 29, 2019, in the Nature Research journal Scientific Reports, shows we are moving forward: “Columbia Engineers Translate Brain Signals Directly into Speech; Advance marks critical step toward brain-computer interfaces that hold immense promise for those with limited or no ability to speak.”

From one article: “Decades of research has shown that when people speak — or even imagine speaking — telltale patterns of activity appear in their brain.”
Early efforts by Dr. Mesgarani and others to decode brain signals relied on simple computer models that analyzed spectrograms, visual representations of sound frequencies, but those models failed to produce anything resembling intelligible speech. So Dr. Mesgarani and his team turned instead to a vocoder, a computer algorithm that can synthesize speech after being trained on recordings of people talking. It is the same technology used by Amazon Echo and Apple Siri to give verbal responses to our questions.
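To picture what a spectrogram actually is, here is a minimal Python sketch, not the study’s code, that turns a short audio clip into one using SciPy. The file name and window settings are illustrative assumptions.

```python
# Minimal sketch (not the study's code): turn an audio clip into a spectrogram,
# the "visual representation of sound frequencies" described above.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, audio = wavfile.read("spoken_digit.wav")   # hypothetical recording
if audio.ndim > 1:                               # mix stereo down to mono
    audio = audio.mean(axis=1)

# Rows are frequencies (Hz), columns are time slices; brighter cells mean
# more sound energy at that pitch and moment.
freqs, times, power = spectrogram(audio, fs=rate, nperseg=512, noverlap=256)
log_power = 10 * np.log10(power + 1e-10)         # decibel scale for readability

print(f"{len(freqs)} frequency bins x {len(times)} time frames")
```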
The researchers worked with epilepsy patients who were already undergoing brain surgery. The patients listened to sentences spoken by different people while the researchers measured their patterns of brain activity, and those neural patterns were used to train the vocoder. Next, the researchers asked the same patients to listen to speakers reciting digits from 0 to 9 while recording brain signals that could then be run through the vocoder. The resulting sound was analyzed and cleaned up by a type of artificial intelligence that mimics the structure of neurons in the biological brain.
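As a rough illustration of that train-then-decode loop, here is a toy Python sketch, not the authors’ actual model, in which a small neural network learns to map simulated brain-signal features to spectrogram-like frames that a vocoder could then render as sound. All of the data is random and every dimension is made up for the example.

```python
# Toy sketch of the decode loop described above, NOT the study's model:
# a small neural network learns a mapping from brain-signal features to
# spectrogram-like frames that a vocoder could turn back into audio.
# All data is random; sizes are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_frames, n_electrodes, n_freq_bins = 2000, 64, 32
brain_features = rng.normal(size=(n_frames, n_electrodes))  # stand-in for recorded neural activity
target_spectra = rng.normal(size=(n_frames, n_freq_bins))   # stand-in for speech spectrogram frames

# "Training" step: fit the network on frames recorded while patients
# listened to sentences.
decoder = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=300, random_state=0)
decoder.fit(brain_features[:1500], target_spectra[:1500])

# "Listening to digits" step: decode held-out brain activity into spectra
# that a vocoder would render as the robotic-sounding voice.
decoded = decoder.predict(brain_features[1500:])
print("decoded spectrogram frames:", decoded.shape)
```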
The end result was a robotic-sounding voice reciting a sequence of numbers that listeners understood 75% of the time, which is “well above and beyond any previous attempts,” according to the lead researcher. Can you understand it? I can make out the numbers, but I’m not sure what the voice is saying before “one.”
A computer reconstruction based on brain activity recorded while a person listened to spoken digits. H. AKBARI ET AL., DOI.ORG/10.1101/350124
And now for the most exciting part, and what all of us who care for a verbally disabled person want to hear: “Dr. Mesgarani and his team plan to test more complicated words and sentences next, and they want to run the same tests on brain signals emitted when a person speaks or imagines speaking. Ultimately, they hope their system could be part of an implant, similar to those worn by some epilepsy patients, that translates the wearer’s thoughts directly into words.”
“In this scenario, if the wearer thinks ‘I need a glass of water,’ our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech,” said Dr. Mesgarani. “This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”
References
- https://zuckermaninstitute.columbia.edu/columbia-engineers-translate-brain-signals-directly-speech
- https://www.nature.com/articles/s41598-018-37359-z
- https://pursuitofresearch.org/2019/01/04/artificial-intelligence-turns-brain-activity-into-speech
LISA GENG
Author and Executive Director of The Cherab Foundation, Lisa Geng got her start as a designer, patented inventor, and creator in the fashion, toy, and film industries, but after the early diagnosis of her young children, she entered the world of nonprofits, pilot studies, and advocacy. As the mother of two “late talkers,” she is the founder and president of the nonprofit CHERAB Foundation, co-author of the acclaimed book The Late Talker (St. Martin’s Press, 2003), and the creator of IQed, a patented nutritional composition. Lisa has served as a parent advocate on an AAN board for vaccines and is a member of CUE through Cochrane US. Lisa is currently working on a second book, The Late Talker Grows Up, and serves as an executive producer of Late Talkers, Silent Voices. She lives on the Treasure Coast of Florida.