Reconstructing English words from neural signals

Using a brain-computer interface, a team of researchers has reconstructed English words from the brain activity of rhesus macaques that listened as the words were spoken.

Using a brain-computer interface, scientists from Brown University have reconstructed English words from neural signals recorded in the brains of nonhuman primates. One aim of the study was to determine whether any particular decoding algorithm performed better than the others.

For the study, two pea-sized implants with 96-channel microelectrode arrays recorded the activity of neurons while rhesus macaques listened to recordings of individual English words and macaque calls. The macaques heard relatively simple one- or two-syllable words: “tree,” “good,” “north,” “cricket,” and “program.”
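As a rough illustration of how recordings like these are typically prepared for decoding (not the authors' actual pipeline), spike times from each channel are usually converted into time-binned firing-rate features. The trial duration, bin width, and simulated spike counts below are assumptions made for the sketch; only the 96-channel count comes from the study.

```python
import numpy as np

# Hypothetical sketch: convert spike times from a 96-channel array into
# binned firing-rate features for one trial. The channel count matches the
# study's arrays; trial length, bin width, and spikes are assumptions.
N_CHANNELS = 96
TRIAL_SEC = 1.0      # assumed duration of one word presentation
BIN_SEC = 0.02       # assumed 20 ms bins

rng = np.random.default_rng(0)
# Simulated spike times (seconds) per channel, stand-ins for real recordings
spike_times = [np.sort(rng.uniform(0, TRIAL_SEC, rng.integers(5, 50)))
               for _ in range(N_CHANNELS)]

edges = np.arange(0, TRIAL_SEC + BIN_SEC, BIN_SEC)
# counts[c, t] = spikes on channel c in time bin t; dividing by the bin
# width converts counts to firing rates in Hz
counts = np.stack([np.histogram(st, bins=edges)[0] for st in spike_times])
rates = counts / BIN_SEC
print(rates.shape)   # (96, 50): channels x time bins
```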

Scientists processed the neural recordings using computer algorithms designed to recognize the neural patterns associated with specific words. From there, the neural data could be translated back into computer-generated speech.
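One common way to turn decoded neural features back into audio (whether it matches the paper's exact method is not stated here) is to have the decoder predict a magnitude spectrogram and then invert it to a waveform, for example with the Griffin-Lim algorithm. The sketch below assumes the decoding step has already produced a predicted spectrogram; the random array stands in for that output.

```python
import numpy as np
import librosa

# Hypothetical final step of a decode-to-speech pipeline: a decoder has
# predicted a magnitude spectrogram from neural data, and Griffin-Lim
# estimates phase and inverts it to an audible waveform. The spectrogram
# here is a random placeholder; in practice it would come from the decoder.
n_fft, hop = 512, 128
pred_magnitude = np.abs(np.random.default_rng(0).normal(
    size=(n_fft // 2 + 1, 100)))          # (freq bins, time frames)

waveform = librosa.griffinlim(pred_magnitude, n_iter=32,
                              hop_length=hop, win_length=n_fft)
print(waveform.shape)  # roughly hop * (frames - 1) audio samples
```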

Scientists used several metrics to assess how closely the reconstructed speech matched the original spoken word that the macaque heard. The recorded neural data produced high-fidelity reconstructions that were intelligible to a human listener.
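The paper's exact metrics are not detailed here; one simple, commonly used proxy for reconstruction quality is the correlation between the log-mel spectrograms of the original and reconstructed audio. A minimal sketch under that assumption:

```python
import numpy as np
import librosa

def spectrogram_correlation(original, reconstructed, sr=16000):
    """Pearson correlation between log-mel spectrograms of two waveforms.
    A generic similarity proxy; the study's actual metrics may differ."""
    n = min(len(original), len(reconstructed))
    mels = [librosa.feature.melspectrogram(y=w[:n], sr=sr, n_mels=64)
            for w in (original, reconstructed)]
    a, b = (np.log(m + 1e-8).ravel() for m in mels)
    return np.corrcoef(a, b)[0, 1]

# Toy check with placeholder audio (a tone vs. a noisy copy of it)
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000)
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.1 * rng.normal(size=clean.shape)
print(spectrogram_correlation(clean, noisy))  # close to 1.0
```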

The use of multielectrode arrays to record such complex auditory information was also a first.

Arto Nurmikko, professor in Brown’s School of Engineering, a research associate in Brown’s Carney Institute for Brain Science and senior author of the study, said, “Previously, work had gathered data from the secondary auditory cortex with single electrodes, but as far as we know this is the first multielectrode recording from this part of the brain. Essentially we have nearly 200 microscopic listening posts that can give us the richness and higher resolution of data which is required.”

During the study, scientists used recurrent neural networks (RNNs), a type of machine learning algorithm often used in computerized language translation. RNNs produced the highest-fidelity reconstructions and substantially outperformed more traditional algorithms that have proven useful in decoding neural data from other parts of the brain.
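As a minimal sketch of the kind of RNN decoder described (the layer sizes, feature dimensions, and spectrogram targets below are assumptions, not the study's architecture), a recurrent network can map a sequence of binned firing rates to spectrogram frames:

```python
import torch
import torch.nn as nn

class RNNSpeechDecoder(nn.Module):
    """Hypothetical decoder: binned firing rates -> spectrogram frames."""
    def __init__(self, n_channels=96, hidden=256, n_freq_bins=257):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_freq_bins)

    def forward(self, x):                 # x: (batch, time, channels)
        h, _ = self.rnn(x)
        return self.head(h)               # (batch, time, freq bins)

# Toy training step on random stand-in data; real inputs would be aligned
# neural features and audio spectrograms from the experiment
model = RNNSpeechDecoder()
rates = torch.randn(8, 50, 96)            # batch of 8 trials, 50 time bins
target = torch.randn(8, 50, 257)          # matching spectrogram frames
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

loss = nn.functional.mse_loss(model(rates), target)
optim.zero_grad()
loss.backward()
optim.step()
print(float(loss))
```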

Christopher Heelan, a research associate at Brown and co-lead author of the study, said, “The success of the RNNs comes from their flexibility, which is important in decoding complex auditory information.”

“More traditional algorithms used for neural decoding make strong assumptions about how the brain encodes information, and that limits the ability of those algorithms to model the neural data. Neural networks make weaker assumptions and have more parameters, allowing them to learn complicated relationships between the neural data and the experimental task.”
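To make the contrast concrete, a “traditional” decoder in this sense is often a linear model, which assumes each output is a weighted sum of the neural features. The ridge-regression baseline below is an illustrative assumption, not one of the specific comparison algorithms from the paper, and the data are random placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative linear baseline: predict each spectrogram frame as a linear
# combination of the same time bin's firing rates. Its strong linearity
# assumption is exactly what limits it relative to a flexible RNN.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 96))     # 400 time bins x 96 channels
Y = rng.normal(size=(400, 257))    # matching spectrogram frames

linear = Ridge(alpha=1.0).fit(X[:300], Y[:300])
print(linear.score(X[300:], Y[300:]))  # R^2 on held-out time bins
```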

The scientists expect that this kind of work could aid in developing neural implants that may assist in restoring people’s hearing.

Nurmikko said, “The aspirational scenario is that we develop systems that bypass much of the auditory apparatus and go directly into the brain. The same microelectrodes we used to record neural activity in this study may one day be used to deliver small amounts of electrical current in patterns that give people the perception of having heard specific sounds.”

The study is published in the journal Communications Biology.
