Google is training its AI for people with voice impairments

Google’s Project Euphonia aims to make speech technology more accessible to people with speech impairments.

Voice assistants can be a great convenience, performing a variety of actions after hearing a wake word or command. They are developing quickly, changing our lives and making everyday tasks easier. However, millions of people have speech impairments caused by neurological conditions such as stroke, ALS, multiple sclerosis, and traumatic brain injury. For those people, voice assistants can be frustrating and difficult to use.

To change that, Google has unveiled Project Euphonia, an effort under its AI for Social Good program that uses artificial intelligence to improve speech recognition technology. In other words, Google is training its AI to better understand diverse speech patterns, including impaired speech.

For the project, Google teamed up with the ALS Therapy Development Institute (ALS TDI) and the ALS Residence Initiative (ALSRI). To improve its AI technology, the team is using voice recordings from people who have ALS.

ALS is a neurodegenerative condition that can cause dysarthria, the term for slow, effortful, slurred speech and a breathy or hoarse voice; weakening lung muscles affect speech as well. Google partnered with these groups to learn about the communication needs of people with speech impairments and to adapt its AI algorithms so that mobile phones and computers can more reliably transcribe words spoken by people with these kinds of speech difficulties. Just as the friends and family of people with ALS understand them, now computers and mobile phones will too.

To gather training data, Google has recorded thousands of voice samples. With the help of Dimitri Kanevsky, a speech researcher at Google who learned English after becoming deaf, the team recorded around 15,000 phrases. Those recordings were then converted into spectrograms, visual representations of sound, which were used to train the AI system to better recognize these less common types of speech.
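As a rough illustration of that preprocessing step, the sketch below converts a voice recording into a log-mel spectrogram, the kind of visual representation of sound described above. The library (librosa), the parameter values, and the file name are assumptions for illustration only; the article does not describe Google's actual pipeline.

```python
# Minimal sketch: turn a recorded phrase into a spectrogram that a
# speech-recognition model could be trained on. Library choice and all
# parameters are illustrative assumptions, not Google's pipeline.
import librosa
import numpy as np

def phrase_to_spectrogram(wav_path: str) -> np.ndarray:
    """Load a voice recording and convert it to a log-mel spectrogram."""
    # Load audio at 16 kHz, a common sample rate for speech models.
    audio, sr = librosa.load(wav_path, sr=16000)

    # Short-time Fourier transform mapped onto mel-scaled frequency bins,
    # which roughly match how humans perceive pitch.
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr, n_fft=400, hop_length=160, n_mels=80
    )

    # Log compression keeps quiet and loud sounds on a comparable scale.
    return librosa.power_to_db(mel, ref=np.max)

# Each spectrogram (a 2-D array: mel bands x time frames) would then be
# paired with its transcript and fed to the recognizer during training.
spec = phrase_to_spectrogram("phrase_0001.wav")  # hypothetical file name
print(spec.shape)  # e.g. (80, n_frames)
```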

The project is still in progress; its AI models currently target people who speak English and have impairments typically associated with ALS. The team hopes the research can eventually be applied to larger groups of people and to different speech impairments.

Google also wants its AI algorithms to detect sounds or gestures and translate them into actions. This could help people who are severely disabled and cannot speak. In short, Google wants to develop AI that can understand anyone, regardless of how they communicate.
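To make the idea concrete, here is a toy sketch of the last step: once a model has classified a non-speech sound or gesture, a simple lookup can turn that label into a device action. The labels and actions are invented for illustration; the article does not describe how Google's system is designed.

```python
# Toy sketch: dispatch classified sound/gesture labels to actions.
# All labels and actions here are hypothetical examples.
from typing import Callable, Dict

def turn_on_lights() -> None:
    print("Lights on")

def call_caregiver() -> None:
    print("Calling caregiver...")

# Map detected events (output of a hypothetical classifier) to actions.
ACTIONS: Dict[str, Callable[[], None]] = {
    "hum_short": turn_on_lights,
    "hum_long": call_caregiver,
}

def handle_event(label: str) -> None:
    """Run the action registered for a classified event, if any."""
    action = ACTIONS.get(label)
    if action is not None:
        action()

handle_event("hum_short")  # prints "Lights on"
```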
