This Is How Our Brain Encodes Sounds

A new interpretation of an old observation.


When you hear a sound, your brain must process it quickly to identify what it is and where it is coming from. In a new study, a scientist at Washington University in St. Louis offers a new interpretation of an old observation.

Dennis Barbour, MD, PhD, suggests that auditory cortex neurons encode sounds differently than previously thought. Sensory neurons in the auditory cortex respond relatively indiscriminately at the beginning of a new stimulus but rapidly become much more selective.

The few neurons that keep responding for the duration of a stimulus were generally thought to encode the identity of that stimulus, while the many neurons that respond only at its onset were thought to signal merely its presence.

For the first time, the study tested the prediction that these indiscriminate initial responses would encode stimulus identity less accurately than the selective responses that follow for the rest of the sound’s duration.

Barbour said, “At the beginning of a sound transition, things are diffusely encoded across the neuron population, but sound identity turns out to be more accurately encoded.”

“As a result, you can more rapidly identify sounds and act on that information. If you get about the same amount of information for each action potential spike of neural activity, as we found, then the more spikes you can put toward a problem, the faster you can decide what to do. Neural populations spike most and encode most accurately at the beginning of stimuli.”
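To make the arithmetic behind that argument concrete, here is a minimal sketch in Python (not taken from the study) that assumes every spike carries roughly the same amount of identity information; the firing rates and bits-per-spike value are hypothetical placeholders chosen only for illustration.

```python
import numpy as np

# Toy illustration only: assumes each spike contributes roughly the same
# amount of information about which sound is playing. All numbers are
# hypothetical placeholders, not values from Barbour's study.

BITS_PER_SPIKE = 0.1                # assumed information carried by one spike
N_SOUNDS = 16                       # picking 1 of 16 sounds needs log2(16) = 4 bits
bits_needed = np.log2(N_SOUNDS)

onset_rate_hz = 200.0               # assumed population spike rate at stimulus onset
sustained_rate_hz = 50.0            # assumed population spike rate later in the stimulus

def time_to_identify(pop_rate_hz: float) -> float:
    """Seconds until enough spikes have arrived to supply the bits needed."""
    spikes_needed = bits_needed / BITS_PER_SPIKE
    return spikes_needed / pop_rate_hz

print(f"onset:     {time_to_identify(onset_rate_hz) * 1000:.0f} ms to identify the sound")
print(f"sustained: {time_to_identify(sustained_rate_hz) * 1000:.0f} ms to identify the sound")
```

With equal information per spike, the higher onset firing rate reaches the identification threshold four times sooner in this toy example, which is the intuition behind acting faster on onset activity.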

Barbour’s study recorded the activity of individual neurons. Noninvasive techniques can make broadly similar measurements of brain activity: event-related potential (ERP) recordings pick up brain signals through electrodes on the scalp and reflect neural activity synchronized to the onset of a stimulus.

Functional MRI (fMRI), on the other hand, reflects activity averaged over several seconds. If the brain were using fundamentally different encoding schemes for stimulus onsets versus sustained stimulus presence, the two methods might be expected to diverge in their findings. Yet both reveal the neural encoding of stimulus identity.

Barbour said, “There has been a lot of debate for a very long time, but especially in the past couple of decades, about whether information representation in the brain is distributed or local.”

“If the function is localized, with small numbers of neurons bunched together doing similar things, that’s consistent with sparse coding, high selectivity, and low population spiking rates. But if you have distributed activity or lots of neurons contributing all over the place, that’s consistent with dense coding, low selectivity, and high population spiking rates. Depending on how the experiment is conducted, neuroscientists see both. Our evidence suggests that it might just be both, depending on which data you look at and how you analyze it.”
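As a rough illustration of the two regimes Barbour describes, the sketch below (again, not from the study) builds a hypothetical “sparse” population, in which each stimulus strongly drives only a few neurons, and a hypothetical “dense” population, in which every stimulus weakly drives many neurons, then decodes stimulus identity from noisy spike counts in both. All tuning values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N_NEURONS, N_STIMULI, N_TRIALS = 40, 8, 500
WINDOW_S = 0.1                                   # 100 ms observation window

# Hypothetical tuning curves (spikes/s), not measured data.
# Sparse / local coding: each stimulus strongly drives only a few neurons.
sparse_tuning = np.zeros((N_STIMULI, N_NEURONS))
for s in range(N_STIMULI):
    sparse_tuning[s, rng.choice(N_NEURONS, size=3, replace=False)] = 30.0
# Dense / distributed coding: every stimulus weakly drives many neurons.
dense_tuning = rng.uniform(2.0, 10.0, size=(N_STIMULI, N_NEURONS))

def decode_accuracy(tuning: np.ndarray) -> float:
    """Fraction of trials where nearest-mean decoding recovers the stimulus."""
    expected_counts = tuning * WINDOW_S
    correct = 0
    for _ in range(N_TRIALS):
        s = rng.integers(N_STIMULI)
        counts = rng.poisson(expected_counts[s])             # noisy spike counts
        guess = np.argmin(((counts - expected_counts) ** 2).sum(axis=1))
        correct += int(guess == s)
    return correct / N_TRIALS

print("sparse coding decode accuracy:", decode_accuracy(sparse_tuning))
print("dense coding decode accuracy:", decode_accuracy(dense_tuning))
```

Run as written, both toy populations should decode identity well above the 1-in-8 chance level, echoing the point that either scheme can carry stimulus information; which one an experiment reveals depends on what is measured and how it is analyzed.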

“The research is fundamental work toward building a theory of how information might be encoded for sound processing, yet it implies a novel sensory encoding principle potentially applicable to other sensory systems, such as how smells are processed and encoded.”
