For parents of deaf children, deciding whether to get a cochlear implant can be tough. The great hope is that an implant will help deaf children gain oral language skills. Behavioral data has suggested that congenitally deaf children best receive implants by age 4, but little has been known about how they perceive their first sounds from an implant. A new study suggests that after only 4 months with an implant, brain signals can distinguish basic linguistic features as proficiently as in hearing children – showing very early and fast plasticity in the brain that leads to language development.
The study, led by Niki Vavatzanidis of the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig and Technische Universität Dresden, looked specifically at how congenitally deaf children with cochlear implants perceived long and short vowel variations directly after implantation. They measured EEG (electroencephalogram) signals to track over the course of 8 months how quickly the children could differentiate these vowel lengths, which are among the most basic linguistic features.
As reported in the Journal of Cognitive Neuroscience this month, after only 2 months of auditory input from the implants, the children could differentiate between long and short syllables, and after only 4 months, their EEG signals reached the same properties as those of the normal-hearing control group. For parents and therapists, this means that children develop basic auditory skills very early on.
Vavatzanidis spoke with CNS about how she became interested in this research area, differences in vowel length among languages, how the new study fits in with past work on language acquisition in the deaf, and what it was like working with 2-year-olds.
CNS: How did you become personally interested in language acquisition and in particular in the hearing impaired?
Vavatzanidis: I’ve grown up with several languages, which is certainly one of the reasons why language has always fascinated me, but I must admit that I was quite naïve about hearing impairments until my Ph.D. project. I was, however, instantly intrigued when I heard that the Max Planck Institute for Human Cognitive and Brain Sciences and the Saxonian Cochlear Implant Center planned to collaborate in establishing a lab for studying aspects of hearing with a cochlear implant and that there would be a chance to study the development of young children. There you have it all at once: developmental aspects of language, the plasticity of a young brain encountering a new modality, and a bridge between basic research and clinical outcomes. Learning about hearing impairments was a fascinating side effect that I very much appreciate.
CNS: What have we known previously about language acquisition in people with cochlear implants?
Vavatzanidis: Up to now, almost all child studies on implants have been on the behavioral level. They show that a large proportion of children with an implant are able to develop a good level of oral communication that allows them to attend regular schools. Whether they succeed depends on several factors, one of them being age at implantation. The longer a congenitally deaf child remains without input, the more the neural structures of the auditory cortex develop in an unfavorable manner, and the larger the gap the child has to overcome to reach the level of his or her age peers. While it is still debated whether it makes a difference if the child is implanted in the first or second year of life, there is general agreement that outcomes are poor if the implantation takes place after the age of 4.
Another factor affecting the outcome is whether the child is in fact congenitally deaf or whether he or she had any auditory input at all. Even small amounts of hearing experience are a great advantage for auditory development and thus for later language development.
CNS: What was the major goal of this study?
Vavatzanidis: The principal aim was to understand what kinds of acoustic, language-relevant features are available to children via the implant. A key motivation for providing young children with an implant is the hope that they will catch up in the acquisition of oral language and thus be able to communicate in their otherwise hearing environment (as many come from hearing families).
Yet we hardly know which of the auditory features relevant for language acquisition these children are actually able to perceive. Their access to auditory input has been delayed substantially, and the input provided by the implant is diminished compared to “normal” hearing, for example in its spectral features. We also do not know how this perception develops over time. In other words, we wanted to know: What tools do these children have to extract what they need from their language environment in order to develop language themselves? How long do they need to adapt to this new sensory modality before developing a sensitivity to language-relevant features?
All my studies involve children implanted before the age of 4, i.e. while they are in their language-sensitive phase and still have a good prognosis for language acquisition. In the present study, we focused on the “simple” feature of vowel length – the ability to differentiate between syllables with a long or short vowel. If you cannot perceive the difference, you will have trouble telling apart words like “feel” and “fill.” This may sound less dramatic in English, but it has severe consequences in German, for example, where it leads to the confusion of pronouns and prepositions like “ihn” vs. “in” (“him” vs. “in”) and other commonly used words. In addition, in several languages the lengthening of a vowel marks syllable stress, and the perception of stress patterns in turn aids children in segmenting the speech stream into single words.
CNS: What were the novel elements to your study? And would you please explain the vowel length paradigm?
Vavatzanidis: Previous studies have mainly focused on older children with several years of experience with the implant, in groups that were often very heterogeneous with respect to age, age at implantation, and hearing history. For this study, we wanted a consistent group of congenitally deaf children, all implanted at a young age, whose development in distinguishing linguistically relevant features we could follow at bimonthly intervals during their first 8 months of hearing experience with the implant.
The paradigm itself is a standard oddball paradigm: in one block, a series of syllables with a short vowel (/ba/) is interrupted occasionally by a syllable with a long vowel (/ba:/). In a second block we reverse the roles, i.e. the short syllable interrupts the series of long syllables. While the children listen to these syllables, we record EEG data that should tell us whether they perceive the acoustic deviation of the interrupting syllable.
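The structure of such an oddball block can be sketched as a simple stimulus-sequence generator. This is only an illustration of the general paradigm, not the study's actual stimulation code; the deviant probability, trial count, and minimum spacing between deviants are assumed values for the sketch.

```python
import random

def make_oddball_block(standard, deviant, n_trials=200,
                       deviant_prob=0.15, min_gap=2, seed=None):
    """Generate a stimulus sequence for one oddball block.

    A frequent 'standard' syllable is occasionally interrupted by a
    rare 'deviant' syllable. Each deviant is preceded by at least
    `min_gap` standards, so deviants never occur back to back.
    """
    rng = random.Random(seed)
    seq = []
    since_deviant = min_gap  # allow a deviant right away
    for _ in range(n_trials):
        if since_deviant >= min_gap and rng.random() < deviant_prob:
            seq.append(deviant)
            since_deviant = 0
        else:
            seq.append(standard)
            since_deviant += 1
    return seq

# Block 1: short /ba/ is the standard, long /ba:/ the rare deviant;
# Block 2 reverses the roles, as in the study.
block1 = make_oddball_block("ba", "ba:", seed=1)
block2 = make_oddball_block("ba:", "ba", seed=1)
```

Reversing the roles across blocks lets the analysis separate a genuine deviance response from differences in how the two syllables are processed on their own.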
CNS: What were you most excited to find?
Vavatzanidis: We were amazed to find such a clear differentiation of vowel length after only 2 months of implant use, even though we knew that length is among the easier acoustic features that adult cochlear implant users are able to differentiate. In our study, however, we focused in particular on congenitally deaf children to whom sound was an altogether new sensory modality. Seeing the ERPs of the congenitally deaf children reach the level of normal hearing peers after 4 months of implant use was truly astounding.
CNS: What was it like working with such young children?
Vavatzanidis: Trying to get reasonable EEG data from lively 2-year-olds is a challenge, to be sure! This might explain the surprising lack of cochlear implant studies so far in this interesting age group. But it is certainly rewarding as well, not only because you suddenly develop puppeteering and other entertainment skills to keep the children happy. Many children were shy at their first session, but by the third or fourth session, some had already developed into small EEG lab technicians, eagerly putting on the cap and wanting to perform the next steps themselves. Watching the individual progress of so many children is fascinating, and being an early bilingual myself, I was particularly thrilled to see the parallel development of sign and oral language in those children with signing parents.
CNS: How does this work fit in with, or differ from, past work on this topic?
Vavatzanidis: Apparently frequency discrimination also develops early, i.e. between the first and third month of implant use. It thus seems that, at least for early-implanted children, basic acoustic features are available without great delay after implantation, which increases their chance to catch up with normal-hearing peers. This is corroborated by studies from Anu Sharma showing that the early EEG components of children implanted at a young age can reach the maturational level of typically developing children, whereas those of late-implanted children (e.g. at age 7) never reach full maturation.
CNS: What’s next for your work?
Vavatzanidis: We are about to complete a second study with a similar design focusing on stress pattern and a third study focusing on the level of word acquisition. The ultimate goal would be to understand the auditory world of children with a cochlear implant such that we may be able to give better and more specific support where it may be needed during their language development. In doing so, we also expect to learn a lot about the plasticity of a child’s brain adapting to a new or lost modality.
-Lisa M.P. Munoz