When we communicate with others, we are constantly monitoring both our own speech and theirs — taking in multiple external cues — to best engage in meaningful conversation. Despite the multidimensional nature of speech monitoring, most studies to date have focused on how we produce a string of accurately sequenced sound units rather than on how we actively work to control our speech.
In a new study with senior authors Katie L. McMahon and Greig I. de Zubicaray of Queensland University of Technology in Australia, researchers sought to understand what happens in the brain while we monitor our speech. “We were seeking to clarify the neural mechanisms underlying speech monitoring and inhibition, which might help us better understand neurological disorders characterized by impairments of these processes, such as Tourette’s syndrome and stuttering,” says lead author Samuel Hansen, a Ph.D. student at the University of Queensland.
In the study, published in the Journal of Cognitive Neuroscience, the researchers used a modified stop signal task during fMRI to identify the brain areas engaged when attempting to halt speech — finding different patterns of brain activity for successful versus unsuccessful halting. CNS spoke with Hansen to learn more about the study design, the implications of the findings, and next directions for the research.
CNS: How did you become personally interested in this research area?
Hansen: Language, for me, has always been the most uniquely defining feature of the human condition. Speaking seems at once so familiar and natural, yet the ease with which we translate thought into sound belies its underlying complexity. Interrupting a conversational partner — to signal a misunderstanding, to correct an inaccuracy, or to chime in with a clarification — is quite common in everyday speaking. It is important that we integrate the production and perception fields of language research and study them in concert rather than in isolation.
CNS: Can you highlight any novel aspects of your study design?
Hansen: The majority of research on the neural mechanisms of speech errors has involved people with aphasia, as few lab paradigms can generate large numbers of errors in healthy participants. To our knowledge, no one had previously designed an fMRI study using a modified stop signal task that combined picture naming with spoken words as stop signals to elicit a 50% error rate. This design allowed us to identify the neural mechanisms of successful versus unsuccessful speech inhibition for the first time; that is, we were able to show which brain regions are engaged when we commit a production error.
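(The interview does not spell out how the task was calibrated to that 50% error rate. In stop signal research, this is commonly done by adapting the stop-signal delay from trial to trial: a one-up/one-down staircase that converges on the delay at which stopping succeeds about half the time. Below is a minimal sketch of that standard tracking procedure in Python, paired with a toy simulated participant; all names and parameter values are illustrative assumptions, not details from the study.)

```python
import random

def simulate_stop(ssd, stop_rt=0.200, go_rt_mean=0.600, go_rt_sd=0.100):
    """Toy participant model (illustrative only): a stop succeeds when the
    stop process (stop-signal delay + stop latency) finishes before the
    go response would be produced."""
    go_finish = random.gauss(go_rt_mean, go_rt_sd)
    return ssd + stop_rt < go_finish

def track_ssd(n_trials=200, ssd=0.250, step=0.050, lo=0.0, hi=0.900):
    """One-up/one-down staircase on the stop-signal delay (SSD): raise the
    SSD after a successful stop (stopping gets harder), lower it after a
    failed stop, converging on roughly 50% successful stopping."""
    outcomes = []
    for _ in range(n_trials):
        stopped = simulate_stop(ssd)
        outcomes.append(stopped)
        ssd = min(ssd + step, hi) if stopped else max(ssd - step, lo)
    return outcomes

if __name__ == "__main__":
    outcomes = track_ssd()
    print(f"successful stops: {sum(outcomes) / len(outcomes):.0%}")  # hovers near 50%
```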
CNS: Why did you choose to present stop signals that are phonologically similar to the target picture name? Can you describe how that worked?
Hansen: The dominant account of speech monitoring is Willem Levelt’s classic “perceptual loop theory.” It proposes that monitoring of speech production is accomplished via two loops that feed into the speech perception/comprehension system. The outer loop uses overt speech as input (akin to hearing others speak). The inner loop is proposed to use internally generated, phonologically encoded representations as input to the speech perception system.
We reasoned this inner loop should therefore be sensitive to phonologically similar versus dissimilar words presented as stop signals during naming. If the stop signal shared the same initial phoneme with the target picture name — e.g., hearing “cabbage” while naming “camel” — we expected that the perception system would have to process the final phoneme before detecting the discrepancy between inner and overt speech, and so would take longer to halt production. Our results failed to confirm this prediction, raising questions about the inner loop account, or at the very least about the assumption that it operates at the level of phonological representation.
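(To make the manipulation concrete: a stop signal counts as phonologically similar when it shares its initial phoneme(s) with the picture name. The sketch below, again in Python, classifies stop-signal words on that basis; the toy lexicon and its rough phoneme transcriptions are illustrative assumptions, not the study’s actual stimuli.)

```python
# Toy phoneme lexicon (illustrative entries, not the study's materials).
LEXICON = {
    "camel":   ["k", "ae", "m", "ah", "l"],
    "cabbage": ["k", "ae", "b", "ih", "jh"],
    "turtle":  ["t", "er", "t", "ah", "l"],
}

def is_similar_stop_signal(target, stop_word, n_shared=1):
    """Classify a stop-signal word as phonologically similar to the target
    picture name when their first n_shared phonemes match."""
    return LEXICON[target][:n_shared] == LEXICON[stop_word][:n_shared]

print(is_similar_stop_signal("camel", "cabbage"))  # True  -> similar condition
print(is_similar_stop_signal("camel", "turtle"))   # False -> dissimilar condition
```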
CNS: What were you most excited to find?
Hansen: We were very excited to find evidence that halting production engaged both language-specific and domain-general neural mechanisms. In addition, some of these regions were already known to be involved in stuttering, providing some nice converging evidence. We were a bit surprised to find no behavioural or neural evidence for speech monitoring operating at the level of phonological representations, but this is consistent with other recent findings that have questioned the perceptual loop account.
CNS: What’s next for this line of work?
Hansen: Understanding the monitoring and control of speaking is important, but the next stage involves studying the neural mechanisms of speech repair processes. Once speech is interrupted, how does the speaker determine what to say next? That will be an exciting direction for research.
CNS: Anything I didn’t ask you about that you’d like to add?
Hansen: I would like to acknowledge the wonderful people I work alongside who encourage and inspire me. I have been lucky to be part of a vibrant team of researchers who have fostered and fueled my academic curiosity and enthusiasm.
-Lisa M.P. Munoz