Neuroscientists at KU Leuven have developed a technique that allows them to diagnose more accurately patients who cannot actively participate in a speech-understanding test, because they are too young, for instance, or because they are in a coma. The technique also holds potential for the development of smart hearing devices.
A typical complaint from hearing-aid users is that they can hear speech but cannot make out its meaning. Indeed, being able to hear speech and actually understanding what is being said are two different things.
The tests that determine whether you can hear soft sounds are well established. Just think of the test used by audiologists in which you have to indicate whether you hear beep tones. An alternative option makes use of EEG, which is routinely used to test newborns: click sounds are presented through small caps placed over the ears, and electrodes on the head then measure whether any brainwaves are produced in response to these sounds.
Co-author Jonathan Simon from the University of Maryland said, “The great advantage of EEG is that it is objective and that the person undergoing the test doesn’t have to do anything. This means that the test works regardless of the listener’s state of mind. We don’t want a test that would fail just because someone stopped paying attention.”
“Today, there’s only one way to test speech understanding. First, you hear a word or sentence. You then have to repeat it so that the audiologist can check whether you have understood it. This test obviously requires the patient’s active cooperation.”
Lead author Tom Francart from KU Leuven said, “And we’ve succeeded. Our technique uses 64 electrodes to measure someone’s brainwaves while they listen to a sentence. We combine all these measurements and filter out the irrelevant information. If you move your arm, for instance, that creates brainwaves as well. So we filter out the brainwaves that aren’t linked to the speech sound as much as possible. We compare the remaining signal with the original sentence. That doesn’t just tell us whether you’ve heard something but also whether you have understood it.”
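The multi-electrode step Francart describes can be illustrated with a minimal sketch. This is not KU Leuven's actual pipeline; it only shows the principle, using simulated data and a plain least-squares decoder: weights over 64 hypothetical electrode channels are fitted to reconstruct the speech signal, which implicitly down-weights channels dominated by unrelated activity such as movement artifacts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 64 EEG channels, 1000 time samples.
n_channels, n_samples = 64, 1000
envelope = rng.standard_normal(n_samples)  # stand-in for the speech signal

# Simulate EEG: every channel mixes the speech-related signal with a lot of
# unrelated brain activity (e.g. the brainwaves created by moving an arm).
mixing = rng.standard_normal(n_channels)
eeg = np.outer(mixing, envelope) + 5.0 * rng.standard_normal((n_channels, n_samples))

# Least-squares decoder: channel weights that best reconstruct the speech
# signal from the EEG -- the "combine and filter" step, in spirit.
weights, *_ = np.linalg.lstsq(eeg.T, envelope, rcond=None)
reconstruction = eeg.T @ weights

# Compare what remains with the original sentence.
similarity = np.corrcoef(reconstruction, envelope)[0, 1]
print(f"similarity: {similarity:.2f}")
```

Despite each simulated channel being mostly noise, combining all 64 channels recovers a signal that correlates clearly with the original.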
The way this works is much like comparing two audio files on your computer: when you open the files, you sometimes see two figures showing sound waves. Tom Francart: "Now, imagine comparing the original audio file of the sentence you have just heard with a different audio file derived from your brainwaves. If there is sufficient similarity between these two files, it means that you have properly understood the message."
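The comparison Francart describes can be expressed as a simple similarity score between two signals. As a hedged illustration (the researchers' actual measure is not specified here), Pearson correlation distinguishes a brainwave-derived signal that matches the sentence from one derived from a different sentence:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "audio files": the sentence that was played, a noisy version
# standing in for the signal derived from the listener's brainwaves, and
# an unrelated signal standing in for a different sentence.
original = rng.standard_normal(2000)
derived = original + 0.8 * rng.standard_normal(2000)
unrelated = rng.standard_normal(2000)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation as a simple similarity score between two signals."""
    return float(np.corrcoef(a, b)[0, 1])

match = similarity(original, derived)      # high: message was "understood"
mismatch = similarity(original, unrelated) # near zero: no match
print(f"match: {match:.2f}, mismatch: {mismatch:.2f}")
```

When the similarity is sufficiently high, the message counts as understood; an unrelated signal yields a score near zero.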
This new method makes it possible to determine objectively and automatically whether someone understands what is being said. This is particularly useful in the case of patients who cannot respond, including patients in a coma.
Francart said, “The findings can also help to develop ‘smart’ hearing aids and cochlear implants. Existing devices only ensure that you can hear sounds. But with built-in recording electrodes, the device would be able to measure how well you have understood the message and whether it needs to adjust its settings – depending on the amount of background noise, for instance.”