Breakthroughs in cognitive neuroscience: Highlighting influential research from the past 20 years
This series explores influential papers in cognitive neuroscience, as measured by the number of times they are cited each year. The papers featured are a sampling of many important works in the field over the past 20 years. This is the fourth in the series. Read the first, second, and third stories.
The ability to view neural responses through fMRI has transformed neuroscience over the last two decades. When the technology first came into use, scientists applied it to identify which brain regions were most activated when people were exposed to various stimuli. While that use continues, another fMRI method has emerged: looking not just at where neural activity occurs but at the patterns it forms. The first demonstration of this method, known as multivariate pattern analysis (MVPA), came in 2001, when scientists used the technique to study how people categorize faces and objects.
“That paper made two major contributions,” says James Haxby of Dartmouth, lead author of the 2001 paper in Science. “First, it showed that fMRI responses can be analyzed as patterns of activity, and these patterns can be decoded to identify the stimulus or cognitive operation that produced the response.” The second major contribution, he explains, was the discovery that the brain’s visual cortex does not contain specialized patches that each process only one category of objects; rather, it distributes representations of faces, animals, and objects across regions in distinct patterns.
When Haxby’s team first began the project, they were most interested in understanding how the human brain categorizes faces, animals, and objects. Previous research had identified, for example, that the fusiform gyrus region of the brain became more activated when recognizing faces as opposed to objects. Haxby and his colleagues wondered if the fusiform region was specialized for face processing and nothing else, or if it was part of a larger pattern.
“The proposal that every possible face, animal, and object category has a specialized region or set of neurons dedicated to its representation didn’t seem possible,” wrote Haxby in a 2012 perspective piece in NeuroImage. “There are too many ways that faces and objects can be categorized.” It seemed more probable to his research team that both strong and weak responses play an important role in face and object recognition and together comprise a pattern of activity.
Their study looked at neural responses to eight categories of stimuli: human faces, cats, chairs, shoes, scissors, bottles, houses, and phase-scrambled images. They found a distinct pattern of response for each category, and those patterns remained distinct even when the most strongly activated regions were excluded from the analysis. For example, the pattern of activity evoked by faces was still identifiable when the strong activation in the fusiform gyrus was left out. The paper showed that representations of faces and objects in ventral temporal cortex overlap and are widely distributed.
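To give a concrete sense of that analysis logic, here is a minimal sketch in Python of split-half, correlation-based pattern analysis in the spirit of the 2001 study. The data are randomly generated, and the variable names and voxel counts are illustrative assumptions, not the authors' actual pipeline.

# Minimal sketch of split-half, correlation-based pattern analysis.
# Illustrative only; data, names, and counts are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_categories = 8   # faces, cats, chairs, shoes, scissors, bottles, houses, scrambled
n_voxels = 500     # hypothetical number of ventral temporal cortex voxels

# Mean response pattern per category, estimated separately from even and
# odd runs of the experiment (here: random stand-in data).
even_patterns = rng.standard_normal((n_categories, n_voxels))
odd_patterns = even_patterns + 0.5 * rng.standard_normal((n_categories, n_voxels))

# A category is identified correctly when the odd-run pattern for that
# category correlates more strongly with its own even-run pattern than
# with any other category's even-run pattern.
correct = 0
for i in range(n_categories):
    correlations = [np.corrcoef(odd_patterns[i], even_patterns[j])[0, 1]
                    for j in range(n_categories)]
    if int(np.argmax(correlations)) == i:
        correct += 1

print(f"Categories identified correctly: {correct}/{n_categories}")

The point the sketch captures is that the category is read out from the whole pattern of stronger and weaker responses, not from any single most activated region.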
While the paper informed big changes in how scientists think about visual recognition, its even more lasting legacy has been the use of MVPA. Since its publication, subsequent work in many laboratories has shown that MVPA can decode low-level visual features and auditory stimuli, as well as more abstract cognitive states, such as intentions. “It has led to the development of methods, such as representational similarity analysis, that give deeper insights into how information is encoded in patterns, rather than just the information that can be distinguished,” Haxby says.
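Representational similarity analysis, which Haxby mentions, compares the geometry of those patterns rather than raw decoding accuracy. Below is a rough sketch, again with made-up data and assumed names, of how two regions' representations can be compared.

# Rough sketch of representational similarity analysis (RSA):
# build a dissimilarity matrix over category patterns and compare it
# between two brain regions. Data and names are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
patterns_region_a = rng.standard_normal((8, 500))   # 8 categories x voxels
patterns_region_b = rng.standard_normal((8, 300))

# Representational dissimilarity matrices: 1 minus the correlation between
# every pair of category patterns, as condensed vectors.
rdm_a = pdist(patterns_region_a, metric="correlation")
rdm_b = pdist(patterns_region_b, metric="correlation")

# Two regions carry similar information when their dissimilarity matrices
# are correlated, even if their voxel counts and activation levels differ.
rho, _ = spearmanr(rdm_a, rdm_b)
print(f"RDM similarity (Spearman rho): {rho:.2f}")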
“MVPA, as used in this first paper and by essentially all subsequent work, builds a new model for the representational space in each subject, meaning that a subject’s brain activity is decoded based on an individually-tailored model that is valid only for that subject,” he says. Haxby and colleagues are now developing methods for building a model of neural responses that can be applied across individual brains and across a wide range of stimuli or cognitive states. They developed a new algorithm that aligns data across subjects into a common, high-dimensional “representational space” that enables them to decode an individual subject’s brain activity based on other subjects’ responses.
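The core idea behind that alignment can be illustrated with a simplified stand-in: a single orthogonal Procrustes rotation that maps one subject's response space onto another's. This is only a sketch of the general approach with simulated data, not the published hyperalignment algorithm.

# Simplified stand-in for aligning subjects into a shared representational
# space: one orthogonal Procrustes rotation. Illustrative assumptions only.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(2)
n_timepoints, n_dims = 200, 50

# Subject 1's responses to a shared stimulus sequence; subject 2's responses
# are modeled as a rotated, noisier version of the same representational space.
subj1 = rng.standard_normal((n_timepoints, n_dims))
true_rotation = np.linalg.qr(rng.standard_normal((n_dims, n_dims)))[0]
subj2 = subj1 @ true_rotation + 0.1 * rng.standard_normal((n_timepoints, n_dims))

# Estimate the transformation that maps subject 2 into subject 1's space.
rotation, _ = orthogonal_procrustes(subj2, subj1)
subj2_aligned = subj2 @ rotation

# After alignment, timepoint-by-timepoint patterns from the two subjects
# correlate highly, which is what makes between-subject decoding possible.
corrs = [np.corrcoef(subj1[t], subj2_aligned[t])[0, 1] for t in range(n_timepoints)]
print(f"Mean pattern correlation after alignment: {np.mean(corrs):.2f}")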
The work, Haxby says, lays the foundation for a type of functional brain atlas in which scientists can compare brain responses across subjects, experiments, and research centers. A future area of study is to examine how the representational spaces vary across subjects, due to differences in experience, genetics, and clinical disorders. “This may lead to deeper understanding of how development, learning, and disorders affect brain organization.”
-Lisa M.P. Munoz
Media contact: Lisa M.P. Munoz, CNS Public Information Officer, cns.publicaffairs@gmail.com