Study Examines How Speech Is Processed in the Brain

November 28, 2022

Study sheds light on language processing and could lead to deeper understanding of disorders such as dyslexia and autism.

Research by Professor Hysell Oviedo provides new information about the neural mechanisms that underlie sound processing. (Credit: Getty Images)

The cerebrum is the largest part of the human brain, and it controls a myriad of higher functions, such as problem-solving, learning, and speech. It’s also where we process language and sound, a job that’s divided between the left and right hemispheres of the brain. 

“Essentially, your brain has this remarkable ability to simultaneously understand the lyrics of your favorite song and appreciate its musical qualities,” said Professor Hysell Oviedo (GC/City College, Biology, Neuroscience), who led a new study that sheds light on the neural mechanisms that underlie sound processing in the brain.

The paper, published in PLOS Biology, reports that auditory processing centers in the left and right hemispheres have significant differences in recurrent connectivity — when a group of neurons communicates in a looping circuit — leading to differences in how sounds are processed on each side.

The main goal of the study was to test whether differences in recurrent activity led to differences in memory span, the professor said. “Our study suggests that one of the critical differences between the two halves of the brain that allows them to process syntactic and melodic information simultaneously is differences in memory,” she said.


“Brain cells in the left half of the brain appear to have shorter memory spans, which allow them to respond much faster to speech components, whereas cells in the right half of the brain have longer memory spans that allow them to respond to much slower fluctuations in speech and music, potentially,” Oviedo said.

And these differences in memory allow the two halves of the brain to seamlessly synchronize the processing of auditory information, she said.

Though the study’s experiments were carried out in mice, the results are nonetheless relevant to human neurolinguistics, said Oviedo, pointing to studies that similarly theorize that the left hemisphere of the brain may have smaller windows of memory in order to process language syntax.

“Just like humans, mice also preferentially process their own species’ vocalizations in the left hemisphere, and in the right hemisphere they seem to be processing melodic components of sound,” she said. “That’s why we’re using it as a model system to understand basic neural processes — not of human language per se, but, in general, of how a particular species processes its own vocalizations.”

The findings may help advance scientists’ understanding of language processing in the brain, said Oviedo, and of how it’s disrupted in developmental disorders such as dyslexia and autism.

The study authors include Demetrios Neophytou, a Graduate Center Biology Ph.D. student with the Oviedo Lab, and researchers from Stony Brook University and The City College of New York. The project was supported by funding from the National Institutes of Health and the National Science Foundation.

Published by the Office of Communications and Marketing