As I write this, I am in a crowded room surrounded by different voices, a blowing A/C, footsteps down the hall, and the sound of typing from various laptops. How can I best focus on a single voice? It turns out the background noise matters: according to a new study, the brain processes different types of competing sounds in distinct ways, and that shapes how well individuals can pick out a voice in the crowd.
“What makes this different from most other work is that typically researchers are interested in what happens to the target speech when listening against competing sounds,” says Samuel Evans of University College London. “Our approach was to turn this on its head and ask what happens to the distracting sounds themselves, not the target speech.”
To understand how different types of competing sounds affect the brain, Evans, with then-advisor Sophie Scott and colleagues, designed a study in which participants listened to spoken newspaper stories while in an fMRI scanner. At the same time, the researchers played additional sounds that the participants had to ignore. The researchers used digital signal processing techniques to manipulate the properties of the competing sounds, making them more or less like speech. For example, in one sound type, “rotated sounds,” the researchers flipped speech frequencies so that high-frequency components became low and vice versa.
“This allowed us to create a sound that has the acoustic properties of speech without being understandable,” Evans says. “By comparing brain responses to competing speech and these rotated sounds, we were able to identify brain regions that were sensitive to the meaningfulness of the competing sounds.”
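For readers curious about the signal processing, here is a minimal sketch of one common way to produce spectrally rotated speech, in the spirit of the classic (Blesser-style) rotation technique the study describes. The file names, sample rate, and filter settings are illustrative assumptions, not details taken from the paper:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt, resample_poly

# Read a mono recording (hypothetical file name, not from the study).
rate, speech = wavfile.read("speech.wav")
speech = speech.astype(np.float64)

# Resample to 8 kHz so the mirror point lands at fs/4 = 2 kHz,
# a common choice in the spectral-rotation literature.
target_rate = 8000
speech = resample_poly(speech, target_rate, rate)

# Band-limit the signal so the mirrored spectrum stays inside the band.
b, a = butter(6, 3800 / (target_rate / 2), btype="low")
lowpassed = filtfilt(b, a, speech)

# Multiplying sample n by (-1)^n shifts the spectrum by fs/2, which
# mirrors it: a component at frequency f moves to fs/2 - f, so high
# and low frequencies swap while the overall acoustic complexity of
# the signal is preserved.
n = np.arange(len(lowpassed))
rotated = lowpassed * (-1.0) ** n

# Normalize and write out as 16-bit PCM.
rotated /= np.max(np.abs(rotated))
wavfile.write("rotated.wav", target_rate, (rotated * 32767).astype(np.int16))
```

The result sounds speech-like in its rhythm and spectral detail but is unintelligible, which is what makes it a useful control condition for meaningful competing speech.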
As published in the Journal of Cognitive Neuroscience, the researchers found that competing speech is processed predominantly in the left hemisphere of the brain, within the same pathway as target speech, but is not treated the same way in that stream. They also found that individuals who perform better in noisy environments activate the left mid-posterior superior temporal gyrus more strongly.
Evans spoke with CNS about these results and their significance for those of us who struggle to listen while distracted, including older adults and those with learning or hearing impairments.
CNS: How did you become personally interested in this research area?
Evans: I originally trained as a speech and language pathologist, and during that time it was noticeable that many of my patients with communication problems had more difficulty listening in noisy environments than when communicating in relative quiet. This struck me as particularly problematic for children with language learning difficulties, who often learn in noisy classrooms.
CNS: How do you define masking effects?
Evans: Masking effects are effects on the brain and on behaviour that arise from listening when there are multiple sounds in the auditory environment – which of course is most of the time in the real world. The most common example given is when we pay attention to a person speaking at a noisy party. The phenomenon first received a lot of scientific attention back in the 1950s – which is why researchers often refer to it as the cocktail party effect – although these days it might be better called the “bar” or “night club” effect!
When we listen while multiple people are talking, we face two issues. First, we need to identify and tune into the voice of the person that we are speaking to. This can be particularly difficult when the speakers’ voices are very similar. Second, because we process unattended speech to some degree, in case there is something important going on, we have to actively ignore what other people are saying. This is very hard if they are saying something interesting, like sharing a juicy piece of gossip.
However, it isn’t just a problem of competing voices; sometimes we listen against other kinds of sounds, like traffic or machinery noise. These kinds of sounds are clearly very different to speech, so their effects are associated more with the degree to which they obscure the target speaker than with the difficulty of telling two or more voices apart.
CNS: What have we known previously about masking effects and speech perception?
Evans: We know that, in the human brain, listening to speech in quiet predominantly engages a processing stream that runs forward from auditory sensory regions along the length of the temporal lobes. Listening in noisy environments engages this anterior stream but also strongly engages frontal and parietal brain regions that are involved in maintaining attention and making decisions. What we don’t know, however, is exactly what happens to the sounds that we are trying to ignore as they enter this forward-running processing stream.
CNS: What were you most excited to find?
Evans: We were excited to find that there were regions of the brain that were sensitive to whether people could understand the competing sounds. This is consistent with our everyday experience of listening closely to a speaker in a noisy environment but being distracted by someone else saying something meaningful, for example someone saying your name. The most interesting part was that meaningful competing speech engaged brain regions within the forward-running processing stream engaged by listening to a single speaker in quiet, but did not reach brain regions that are engaged at later stages of processing, suggesting that we process competing speech within brain systems that are similar to, but slightly different from, those used for speech in quiet. Indeed, competing speech seemed to be held back, to some extent, from entering the later stages of neural processing, which could reflect the consequences of filtering out competing sounds.
We also found that regions of the right frontal lobe were particularly sensitive to the start of sounds. These regions are often damaged in people who suffer from lapses of attention. This early response to sounds might be important in signalling to the brain the need to be alert to new information in the auditory environment.
Finally, we identified a brain region that individuals who were good at listening in noise activated more.
CNS: How does this work fit in with related past work on listening in noise?
Evans: This work helps us to understand how healthy adults deal with speech in noisy environments. We anticipate that our findings will form the basis for a better understanding of how these same systems are impaired in individuals who find listening in noise more difficult, such as individuals with language learning impairments like dyslexia, older adults, and those with hearing impairments.
CNS: What did you find is most different in people who effectively filter out background noise from those who find it more challenging?
Evans: We found that people who were better at listening to speech in noisy environments activated a region of the left temporal lobe, close to auditory sensory regions, more strongly than those who were worse at it. Understanding more about the function of this brain region may be important in understanding why some groups of individuals find listening in noise so challenging.
CNS: What do you most want people to understand about this work?
Evans: Our main takeaway message is that not all background noise is the same. Finding out how the different properties of competing sounds affect the brain offers an approach to understanding the underlying mechanisms involved in perceiving speech in background noise.
CNS: What’s next for this line of work?
Evans: In the future, we would like to extend this approach to understand how neural responses to masking sounds change across the lifespan (by testing younger and older individuals), and to groups of individuals who find listening in noise particularly difficult, for example, individuals with dyslexia and language impairment.
Follow Evans on Twitter, @SpeechAndBrains.
-Lisa M.P. Munoz