Early language exposure plays a critical role in shaping the young brain. Even babies can discriminate the sounds of various languages, using computational statistics to make sense of what they hear. Patricia Kuhl, co-director of the Institute for Learning & Brain Sciences at the University of Washington, has been pioneering these studies for the past 30 years.
In a recent study she co-authored in Acta Paediatrica, Kuhl and colleagues discovered that babies only hours old can differentiate between the sounds of their native language and those of a foreign language. They found that during the last 10 weeks in utero, babies listen to their mothers talking and, before they are even born, learn to identify the speech patterns of their mother’s native tongue.
The study involved 40 babies just over a day old in Washington and in Sweden. The babies listened to vowel sounds from their native language and from a foreign language, and researchers recorded how long they sucked on a pacifier during each sound. The longer they sucked, the more interested they were judged to be in a particular sound. Both the American and the Swedish babies sucked longer for the foreign-language sounds than for their native ones, indicating that learning had already begun before birth.
Kuhl will deliver a keynote address at the upcoming 20th annual meeting of CNS in San Francisco (April 13-16). She spoke with us about her upcoming talk, the unique challenges of working with infants, how she got started in this line of work, and future directions for the field.
CNS: What will you be talking about for your keynote speech at the CNS meeting?
Kuhl: I’ll discuss early learning in the domain of language, emphasizing three points: (1) how early measures of language processing at the level of the basic building blocks of speech – the consonants and vowels that make up words – predict future language skills; (2) how the “social brain” contributes to early language learning; and (3) how the tools of modern neuroscience – MEG [magnetoencephalography], MRI [magnetic resonance imaging] and DTI [diffusion tensor imaging] – will help us understand how babies crack the speech code.
CNS: What are some of the most exciting lines of research now (and into the future) on children and language development?
Kuhl: The most exciting new research will be looking at the baby brain during early language processing. Using MEG, we can see functional activity while the baby listens to speech, and we can show 4-D images of a particular baby’s functional brain activation co-registered with that baby’s structural brain image (divided into 116 brain areas), as well as that baby’s fiber tracts imaged using DTI. It’s a first in the world and visually stunning. It will allow us to explore learning and to link brain and behavior in a much more dynamic way than ever before.
CNS: How do those research areas differ from where the field was 20 years ago?
Kuhl: Well, let’s start back when I was in grad school. To link brain to behavior then, I attended 7 am autopsy sessions on patients I had treated after a stroke; they had the language impairments known as aphasia. I would study my clinical assessments and therapy notes before the autopsy to predict where the brain damage might have occurred in each patient, and then, during the autopsy, I’d look at the 1 mm slices of that patient’s brain, guided by the main landmarks, in hopes of identifying the soft tissue and grey areas that indicated severe damage, to see whether they were where I’d expected them to be. The biggest challenge: holding up at 7 am under the suffocating and intense smell of the formaldehyde used to preserve the patient’s brain. I would never eat breakfast before these sessions!
CNS: Can you tell us some unique challenges of working with children/infants vs. adults? Any funny or surprising stories you would like to share?
Kuhl: Working with infants and young children is always a challenge, and mastering testing in the MEG machine, which looks like a hair dryer from Mars but is totally safe and totally noninvasive (and silent – yippee!), tops them all. Having 30 years of behavioral work helps, of course. As for the noisier brain measures, such as MRI, our most clever strategy has been to send moms home with what we call the “MRI Lullaby,” which gets the children used to the clanking of the MRI machine. We test them at naptime, and once they’ve experienced the noise of an MRI at home, they sleep to its noisy rhythms during our 7-minute test.
CNS: How did you become interested in childhood learning?
Kuhl: I was always interested in medicine and thought about doing an MD/Ph.D. in neurology, with a specific focus on treating stroke victims who had language disorders known as aphasia. My experience on the neurology wards while at the University of Minnesota got me hooked forever, but my adviser died suddenly during the first year of my studies, and I had to change course because no one else in that neurology department specialized in language and the brain. My adviser on campus suggested focusing on the other end of the language-learning continuum, with the youngest learners: babies. Very little work had been done. That decision changed my life!
CNS: How much of how children learn language is biologically “wired” versus influenced by social factors?
Kuhl: My current working hypothesis is that language learning depends on the social brain. Infants learn not only because they are computational wizards, but also because they are social beings, with a strong drive to communicate with other social beings. My goal is to see in the brain the interaction between brain areas responsible for computation, which are deeply forged in evolution across species, and brain areas that are phylogenetically more modern and enable social understanding. These mechanisms of social understanding may be at the heart of the interaction I’m talking about. Brain studies will test this hypothesis.
CNS: Do you speak any other languages? Do you encourage parents to expose children to multiple languages from a young age?
Kuhl: I definitely encourage parents to expose their young children to native speakers of another language in a social setting. In my own case, my parents spoke German to each other about a third of the time. But it was their private language, meant to communicate things that they didn’t want the 5 of us (me and my siblings) to hear. I have modest skills at decoding German, but I have no facility in speaking it, likely because I wasn’t spoken to in it and didn’t engage in social replies. It’s the only thing they did wrong – they were wonderful parents who fully engaged their children’s curiosity and led us all to believe we could do anything we set our sights on. They followed my career and read my research until they both passed.
CNS: What else are you working on now that we’ve not already covered?
Kuhl: Brain, Brain, Brain – especially the social foundations of learning!
—
Media contact: Lisa M.P. Munoz, CNS Public Information Officer, cns.publicaffairs@gmail.com