CNS 2014 Blog
Does a camera have consciousness? Well, of course not. But the camera serves as a good starting point for understanding some key features of the human visual system that enable conscious awareness of what we see, as explained by Victor Lamme of the University of Amsterdam in a symposium on consciousness at the CNS meeting in Boston on Monday.
A photo camera “not only stores the images that you take but can also categorize part of that image as being a face, it can recognize that face, put a name tag below it, and even judge the emotional expression of that face so that it only takes a photograph when everybody smiles,” Lamme said. How the camera works is very similar to how the human visual system works, he said – with pixels in the CCD chip sampling the image much as photoreceptors in the retina do, and with both having systems for facial and expression recognition.
“But despite all of that, we generally do not assume that photo cameras have conscious sensations of the photos they take,” Lamme said. Why not? What defines human visual consciousness?
As demonstrated in several studies, the human visual system is marked by interactions that integrate information between early visual areas and the systems for facial and emotional recognition. In a 2012 study, for example, Lamme and colleagues tested some of these interactions using both fMRI and EEG data.
They presented participants with faces, houses, and objects, some of which were visible and some of which were rendered invisible using a technique called “dichoptic masking.” The researchers arranged lines of different orientations into two patterns, one presented to the left eye and one to the right. When the two images fused, they formed either a visible face or a homogeneous pattern that masked the face, rendering it invisible. The researchers then compared how the brain processed the visible versus the invisible faces.
They found that the brain region responsible for facial recognition (the fusiform face area) activated for both visible and invisible faces. The difference showed up in the EEG data, where the response was much shorter-lived for invisible faces than for visible ones. The fMRI data also indicated much stronger and more elaborate interactions between the fusiform face area and other visual areas when the faces were visible.
“So there is a clear difference in the way faces that you consciously see are processed compared to faces that you do not consciously see,” Lamme said. Only in the case of faces you consciously see are there interactions between these systems. Going back to the camera analogy: in a camera, the pixels and the modules all work independently. Integration, in other words, is necessary for consciousness.
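To see the contrast Lamme draws, here is a minimal sketch in Python of a purely feed-forward camera pipeline. Everything in it – the function names and the toy image format – is hypothetical and invented for illustration, not a real camera API. The point is structural: each module hands its result forward, and no module ever feeds back or integrates its output with another’s, the kind of recurrent interaction Lamme argues marks conscious vision.

```python
# A purely feed-forward "smart camera" pipeline in the spirit of
# Lamme's analogy. All names and data structures are hypothetical,
# invented for illustration only.

def detect_faces(image):
    """Stage 1: find face regions in the image (here, a toy dict)."""
    return image.get("face_regions", [])

def recognize(region):
    """Stage 2: put a name tag on a detected face."""
    return region.get("name", "unknown")

def judge_expression(region):
    """Stage 3: classify the facial expression."""
    return region.get("expression", "neutral")

def smart_shutter(image):
    """Decide whether to take the photo: only when everybody smiles.

    Information flows strictly forward:
    pixels -> detection -> recognition -> expression -> decision.
    No stage sends results back upstream, and no stage integrates
    its output with another's.
    """
    faces = [(recognize(r), judge_expression(r)) for r in detect_faces(image)]
    if faces and all(expr == "smiling" for _, expr in faces):
        return "take photo", faces
    return "wait", faces

# Toy usage: two detected faces, one not smiling, so the camera waits.
scene = {"face_regions": [
    {"name": "Ann", "expression": "smiling"},
    {"name": "Bob", "expression": "neutral"},
]}
print(smart_shutter(scene))  # ('wait', [('Ann', 'smiling'), ('Bob', 'neutral')])
```

Each stage here could be swapped out or deleted without the others noticing – exactly the modular independence that, on Lamme’s account, distinguishes a camera from a visual system whose areas interact.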
Other studies have worked to pin down the exact interactions that define conscious vision. And work by Lamme and others with non-human primates has shown that these interactions are absent during anesthesia or in other conditions that interrupt communication between the systems.
He also discussed newer work that used the anesthetic ketamine to disrupt the NMDA-receptor pathways responsible for integrating visual information. These same pathways are tied to learning – another factor that distinguishes our visual systems from cameras. “A photo camera, of course, does not learn from the images it takes,” Lamme said.
Finally, Lamme addressed whether higher-level functions, such as attention and the ability to report on what we see, are required for the integration that marks consciousness. The short answer: no. You can read more about new work on why reporting, or reflecting, on what we see is not necessary for visual consciousness in our recent blog post. And you can watch a narration of Lamme’s slide presentation on YouTube here.
-Lisa M.P. Munoz
Follow the meeting on Twitter: @CogNeuroNews, #CNS2014 and read our blog for ongoing coverage.