Guest Post by Marc Coutanche, Yale University
From a young age, we learn the differences between a lemon and a lime and dozens of other fruits, which makes a trip to the farmer's market to shop for fruit seem like a simple task. But despite appearances, very little is simple about holding what you want in mind and then identifying it in the world, whether that is a lime at the market or keys on a cluttered counter. It's a testament to the evolution of the brain that, past childhood, it's hard to even imagine object identification as anything other than effortless. But if you've known someone with Alzheimer's disease, or certain other neurological disorders, its fallibility can become all too clear.
This is perhaps most strikingly apparent from observing patients who have developed “associative visual agnosia” after damage to the brain. The impairment can leave a person unable to identify (previously recognized) objects, despite having potentially perfect vision. These patients can even draw an object that’s placed in front of them, with little recognition of what it is; for example, they might draw a perfect replica of a carrot, with no idea that it’s a food. How does our brain store our knowledge of the thousands upon thousands of objects that we encounter in our lifetime, so that we can recognize them effortlessly from their features?
In a new research study, my coauthor, Sharon Thompson-Schill, and I found evidence that our knowledge of objects draws on a centralized hub in the brain. This hub pulls together dispersed pieces of information about an object’s particular shape, color, and so on, from sensory areas. Understanding these hubs, and how they integrate features, could prove critical to ultimately understanding cases where memory can fail, such as in Alzheimer’s disease.
In the past decade, new machine learning approaches that can ‘decode’ brain activity from fMRI scans have provided opportunities to tackle questions about the brain in new and exciting ways. The approach itself — seeing if brain activity patterns alone can be used to predict what someone is perceiving or thinking about — can sound like something from the pages of a sci-fi novel, but asking this question can tell us a lot about how the brain encodes information. The impressive success of decoding methods comes from their ability to pool together information from distributed populations of neurons.
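For readers curious what this looks like in practice, here is a minimal sketch of the general decoding idea, using placeholder data and a simple linear classifier from scikit-learn; the array shapes, variable names, and model choice are illustrative assumptions, not our exact analysis pipeline.

```python
# A minimal sketch of fMRI pattern decoding, assuming activity patterns have
# already been extracted into a (trials x voxels) array. The placeholder data,
# variable names, and choice of a linear SVM are illustrative only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500

# One activity pattern per trial, one label per trial
# (e.g., which of four objects the participant was thinking about).
patterns = rng.standard_normal((n_trials, n_voxels))
labels = rng.integers(0, 4, size=n_trials)

# A linear classifier pools weak evidence across many voxels; cross-validated
# accuracy above chance suggests the region carries information about the labels.
decoder = LinearSVC(dual=False, max_iter=10000)
accuracy = cross_val_score(decoder, patterns, labels, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")
```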
Imagine activity in the brain as a symphony. Previously, fMRI methods allowed us to listen to one instrument at a time, but machine learning methods let us hear the whole orchestra: in this case, patterns of brain activity. Just as it's easier to identify a musical piece when all the instruments are playing, we can now identify what the brain is processing with a lot more specificity than we previously thought possible.
In our recent study published in Cerebral Cortex, we investigated how knowledge is organized in the brain by having people visually search for fruits and vegetables. Previous work has decoded memories for very distinct categories of items, such as distinguishing faces from vehicles. Decoding different fruits and vegetables is a lot more specific, and this category of objects has properties, such as systematic variations in shape and color, that are well suited to studying "semantic memory": our knowledge of the objects we've encountered throughout our lives.
Some theories suggest that semantic memory has no central location: that it is distributed across the sensory and motor brain areas involved in seeing, hearing, touching, and manipulating objects. For example, your knowledge of a telephone would be spread across your auditory, visual, and motor cortices. Other theories suggest that one or more centralized hubs are important. One such idea is that our brain contains "convergence zones" that each integrate converging information from other brain areas. So your knowledge of limes might come from the successful integration of shape, color, and taste information at a convergence site. A key motivation for our study was to test for evidence of such a convergence zone, and for evidence of converging object properties.
In our experiment, we recorded participants’ brain activity with an fMRI scanner, while asking them to look for one of four fruits and vegetables — carrots, celery, limes, or tangerines. We wanted to probe memory, rather than current perception, so we couldn’t just show images of the fruits and vegetables. Instead, we asked participants to look for objects hidden within colorful visual noise (which looks like static on a screen).
In each trial, we first told our participants which fruit or vegetable to look for, and then showed them images of random visual noise. After some time, an object appeared, concealed inside the static. Importantly, we only looked at the brain activity recorded before the object appeared, while our participants were still looking at totally random noise. Focusing on the brain activity collected when they were holding an object in mind (without seeing it) let us truly probe internally driven brain activity. We wanted to see if this activity would lead us to the location of a centralized hub.
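As a rough sketch of what that selection step looks like, the analysis keeps only the volumes acquired before each object appeared; the array shapes, onset indices, and averaging choice below are hypothetical rather than our actual preprocessing.

```python
# A minimal sketch of restricting the analysis to pre-object timepoints.
# The data, onset indices, and averaging choice are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_volumes, n_voxels = 20, 12, 300

# Hypothetical fMRI data: one (volumes x voxels) block per trial, plus the
# volume index at which the hidden object appeared in the noise.
trial_data = rng.standard_normal((n_trials, n_volumes, n_voxels))
object_onset = np.full(n_trials, 6)

# Keep only the volumes collected while participants were still viewing pure
# noise, averaging them into a single "search" pattern per trial.
pre_object_patterns = np.stack([
    trial_data[i, :object_onset[i]].mean(axis=0) for i in range(n_trials)
])
print(pre_object_patterns.shape)  # (20, 300)
```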
Sure enough, we found that we could decode object identity in just one location: the left anterior temporal lobe, which lies a few inches above and to the front of the left ear. This finding is consistent with previous studies that point to the anterior temporal lobes as being important for semantic memory. For example, the conceptual errors made by dementia patients, including mistakes in naming fruits and matching fruit names to pictures, are associated with deterioration in this brain region.
Interestingly, the memory-generated activity patterns that we found were very similar to the activity patterns we observed when the participants were actually viewing images of each fruit or vegetable. To continue the musical analogy, we heard a similar symphony whether our participants were seeing the objects or merely thinking about them.
We next wanted to see which brain regions supply the converging information that ultimately builds object identity in the anterior temporal lobe. For this, we turned to the visual processing areas responsible for shape and color. We had chosen our fruits and vegetables deliberately: two are green (lime and celery); two are orange (tangerine and carrot); two are elongated (carrot and celery); and two are near-spherical (lime and tangerine).
The idea was to "train" our machine learning decoder to look specifically for brain activity patterns in regions associated with processing shape and color, without picking up on activity associated with other distinguishing features, such as taste. We used the pairs of objects to our advantage here, by training the decoder to distinguish two of the fruits and vegetables (limes versus tangerines for color), and asking how it would classify other items with similar features (celery versus carrots). When using activity from a brain region associated with processing color, our decoder 'mistook' limes for celery, and tangerines for carrots. And the decoders that used data from the shape-processing area confused carrots with celery, and tangerines with limes. Those results made us confident that the decoders were correctly identifying color and shape information in the early visual regions.
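For illustration, the train-on-one-pair, test-on-the-other-pair logic looks roughly like the sketch below, shown for the color decoder; the pattern arrays and labels are hypothetical placeholders rather than our actual data.

```python
# A minimal sketch of the cross-classification logic described above, using
# hypothetical voxel patterns from a color-sensitive visual region.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 300

# Placeholder patterns and color labels for each pair of objects.
lime_tangerine_patterns = rng.standard_normal((n_trials, n_voxels))
lime_tangerine_colors = rng.integers(0, 2, size=n_trials)   # 0 = green, 1 = orange

celery_carrot_patterns = rng.standard_normal((n_trials, n_voxels))
celery_carrot_colors = rng.integers(0, 2, size=n_trials)    # 0 = green, 1 = orange

# Train on one pair of objects...
decoder = LinearSVC(dual=False, max_iter=10000)
decoder.fit(lime_tangerine_patterns, lime_tangerine_colors)

# ...and test on the other pair. Above-chance transfer suggests the decoder
# latched onto the shared feature (color) rather than object identity.
transfer_accuracy = decoder.score(celery_carrot_patterns, celery_carrot_colors)
print(f"Cross-object transfer accuracy: {transfer_accuracy:.2f}")
```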
We then reasoned that if shape and color really do converge on the left anterior temporal lobe, our object-decoders should find it easier to identify the searched-for object (e.g., a tangerine) when both its color (orange) and shape (spherical) brain activity patterns are found in their respective regions. We found exactly this: a decoder could better identify an object from brain activity in the left anterior temporal lobe when both its color and shape were identified from converging feature regions.
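One way to picture that convergence test, under hypothetical per-trial outcomes, is to compare anterior temporal lobe decoding on trials where both feature decoders succeeded against the remaining trials; the sketch below uses simulated outcomes and is not our actual statistical analysis.

```python
# A minimal sketch of the convergence comparison: is anterior temporal lobe (ATL)
# object decoding better on trials where the color AND shape decoders both
# identified the correct feature? All outcomes here are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100

# Hypothetical per-trial outcomes (True = correct) from three decoders.
color_correct = rng.random(n_trials) > 0.4
shape_correct = rng.random(n_trials) > 0.4
atl_object_correct = rng.random(n_trials) > 0.5

both_features = color_correct & shape_correct
acc_when_converging = atl_object_correct[both_features].mean()
acc_otherwise = atl_object_correct[~both_features].mean()

print(f"ATL accuracy when color and shape were decoded: {acc_when_converging:.2f}")
print(f"ATL accuracy otherwise:                         {acc_otherwise:.2f}")
```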
The results of this study give support to theories that our brain contains one or more convergence zones that integrate object properties. This work is also the first to identify and link together the distinct brain patterns associated with both an object and its specific properties (color and shape). As part of the next steps in our research, we are now looking at how this knowledge becomes integrated in our brain during learning.
–
Marc Coutanche is a postdoctoral fellow at Yale University. He conducted this research with Sharon Thompson-Schill while at the University of Pennsylvania.
Are you a member of CNS with an interest in blogging? Consider contributing a guest post about your work or trends in the field. Email your ideas to CNS Public Information Officer, Lisa M.P. Munoz (cns.publicaffairs@gmail.com).