The implications of AI thought-to-speech technology, if fully developed, are so exciting not just for adults whose speech was affected by an accident or head injury, but for all of us who have children with a communication impairment.
When my son Tanner was younger, he would try to talk, and everyone would look to me as his translator. Tanner was diagnosed with both oral and verbal apraxia, as well as dysarthria and a few soft signs. And even though I was really good at speaking “Tanner” (here is his communication book at 3), there were times I had no idea what he was saying either.
For example, one day I was getting the boys ready for school and Tanner said a new word: “Bobo.” New words are always a good thing, but while getting ready for his special-needs preschool he said “Bo Bo Bobo” and put his palms up while tipping his head to the side, which indicated to me that he was not only saying a new word but asking a question about whatever Bobo was. The problem was I had no idea what Bobo stood for.
In addition, back then I only had about three guesses before he would get so frustrated he’d end up in tears. So I did what I normally would do in this situation: looked around him for clues, thought of things that started with a B he could be asking for, and, since it was a school morning with limited time, tried (in vain) to change the subject. It was a stressful morning for all of us, and my not understanding what Tanner was asking drove his frustration to tears. I was almost afraid to mention Bobo again, as I had no idea what it meant and didn’t want to upset him. It was literally months later that I figured out that Bobo stood for Dakota, his brother. He was probably trying to say “brother” rather than attempting “Dakota.” Back then I didn’t have as much knowledge as I do today about helping him with motor planning.
So imagine how happy I was to see that researchers are working with artificial intelligence to turn brain activity into speech. Here is where wishes meet science!
As just announced in 2019 in this Sciencemag article, three research teams have made progress in turning data from electrodes surgically placed on the brain into computer-generated speech. Using computational models known as neural networks, they reconstructed words and sentences that were, in some cases, intelligible to human listeners.
While none of the efforts, described in papers posted in recent months on the preprint server bioRxiv, managed to re-create speech that people had merely imagined, it appears we are getting closer to this being a reality!
The groups behind the new papers made the most of precious data by feeding the information into neural networks, which process complex patterns by passing information through layers of computational “nodes.” The networks learn by adjusting connections between nodes. In the experiments, networks were exposed to recordings of speech that a person produced or heard and data on simultaneous brain activity.
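For the technically curious, here is a minimal sketch of that training loop. It assumes nothing about the actual studies: a small network learns a mapping from synthetic “brain-activity” features to synthetic “audio” features, then reconstructs audio features from brain data it has never seen. Every name, shape, and number below is a made-up stand-in for illustration only.

```python
# A minimal sketch (not any team's actual code) of the core idea:
# learn a mapping from brain-activity features to audio features,
# then "reconstruct" audio from brain data alone.
# All data is synthetic; shapes and sizes are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples = 2000     # time windows of recording (hypothetical)
n_electrodes = 64    # electrode-feature dimension (hypothetical)
n_audio_bins = 32    # audio-spectrogram feature dimension (hypothetical)

# Pretend brain activity X relates to the simultaneous speech audio Y
# through some unknown mapping plus noise.
X = rng.standard_normal((n_samples, n_electrodes))
true_map = 0.3 * rng.standard_normal((n_electrodes, n_audio_bins))
Y = np.tanh(X @ true_map) + 0.1 * rng.standard_normal((n_samples, n_audio_bins))

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

# The network "learns by adjusting connections between nodes":
# gradient descent tunes the weights of its hidden layers.
net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
net.fit(X_train, Y_train)

# Reconstruct audio features from previously unseen brain data.
Y_pred = net.predict(X_test)
corr = np.corrcoef(Y_pred.ravel(), Y_test.ravel())[0, 1]
print(f"correlation between reconstructed and real audio features: {corr:.2f}")
```

In the real studies the inputs were electrode recordings from the brain and the outputs were actual speech audio, and the networks were far more sophisticated, but this train-on-paired-data, reconstruct-from-brain-alone loop is at the heart of each paper.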
One of the teams, led by Nima Mesgarani of Columbia University, relied on data from five people with epilepsy. Their network analyzed recordings from the auditory cortex (which is active during both speech and listening) as those patients heard recordings of stories and of people naming digits from zero to nine. The computer then reconstructed spoken numbers from neural data alone; when the computer “spoke” the numbers, a group of listeners named them with 75% accuracy.
Another team, led by neuroscientists Miguel Angrick of the University of Bremen in Germany and Christian Herff at Maastricht University in the Netherlands, relied on data from six people undergoing brain tumor surgery. A microphone captured their voices as they read single-syllable words aloud. Meanwhile, electrodes recorded from the brain’s speech planning areas and motor areas, which send commands to the vocal tract to articulate words. The network mapped electrode readouts to the audio recordings, and then reconstructed words from previously unseen brain data. According to a computerized scoring system, about 40% of the computer-generated words were understandable.
Finally, neurosurgeon Edward Chang and his team at the University of California, San Francisco, reconstructed entire sentences from brain activity captured from speech and motor areas while three epilepsy patients read aloud. In an online test, 166 people heard one of the sentences and had to select it from among 10 written choices. Some sentences were correctly identified more than 80% of the time. The researchers also pushed the model further: they used it to re-create sentences from data recorded while people silently mouthed words. That’s an important result, Herff says: “one step closer to the speech prosthesis that we all have in mind.”
Again, I know this technology is exciting but still in its infancy. But the next time you joke, “I wish there was a way I could look in his brain and know what he is trying to say!” know that there are people working on that.
Research articles:
https://www.biorxiv.org/content/early/2018/10/10/350124
https://www.biorxiv.org/content/early/2018/11/27/478644
https://www.biorxiv.org/content/early/2018/11/29/481267
LISA GENG
Lisa Geng is an accomplished author, mother, founder, and president of the CHERAB Foundation. She is a patented inventor and creator in the fashion, toy, and film industries. After the early diagnosis of her two young children with severe apraxia, hypotonia, sensory processing disorder, ADHD, and CAPD, she dedicated her life to nonprofit work and pilot studies. Lisa is the co-author of the highly acclaimed book “The Late Talker” (St. Martin’s Press, 2003). She has hosted numerous conferences, including one overseen by a medical director from the NIH for her protocol using fish oils as a therapeutic intervention. Lisa currently holds four patents and patents pending on a nutritional composition. She is a co-author of a study that used her proprietary nutritional composition, published in a National Institutes of Health-based, peer-reviewed medical journal.
Additionally, Lisa has been serving as an AAN Immunization Panel parent advocate since 2015 and is a member of CUE through Cochrane US. Currently working on her second book, “The Late Talker Grows Up,” she also serves as an executive producer of “Late Talkers Silent Voices.” Lisa Geng lives on the Treasure Coast of Florida.