Q&A with Sam Gershman
In the last decade, computational techniques have expanded the toolkit for scientists across disciplines. In neuroscience, computational models are increasingly rendering “visible things that were previously invisible,” says Samuel Gershman, a cognitive neuroscientist at Harvard University.
“Computational modeling is not a niche activity. It’s the same theory-building activity in which all cognitive neuroscientists engage,” Gershman explains. “The difference is that computational models allow us to be very precise about our hypotheses, making them more falsifiable. The hypotheses also tend to be more sophisticated.”
For his computational work on how humans and animals learn to make long-term decisions, an area known as “reinforcement learning,” Gershman has received the CNS Young Investigator Award. “The long-term goal of my work is to understand how the brain produces cognition, and in particular all the cognitive sorcery that we identify as intelligence,” Gershman says.
We spoke with Gershman about this body of work, which he will present at the CNS 2020 meeting in Boston this March, as well as how he got started in the field, some myths surrounding dopamine, and next steps for his work.
CNS: Why do you personally study cognitive neuroscience, and reinforcement learning in particular? What got you interested in the topic?
Gershman: I got really interested in cognitive neuroscience after taking a class on it in my first year of college. It immediately captured my imagination — this idea that we could deconstruct our minds into underlying mechanisms realized in a physical system. I had been exposed to artificial intelligence and cognitive science for many years — my dad was an old-school AI researcher — and cognitive neuroscience seemed like an exciting way to pursue those old questions about intelligence.
I started working in research labs, bouncing around among many different topics — including episodic memory, attention, emotion regulation — before working on reinforcement learning in Nathaniel Daw’s lab at NYU. At first, I was hopelessly confused about reinforcement learning, as I was about most topics in computational neuroscience. I had a sense that this was important and I should understand the math, but it took a long time before things started to make sense.
CNS: So how do you like to describe reinforcement learning to people outside the field?
Gershman: Most of our consequential decisions — where to go to school, what assets to invest in, what job to take, when to start a family — require long-term planning; we have to think not just about a single action but also about sequences of actions with possibly distant consequences. These decisions are difficult because there is a branching tree of future possibilities. Reinforcement learning is the study of how to efficiently solve such problems. One of the remarkable confluences of modern cognitive neuroscience was the discovery that the same algorithms computer scientists use to solve these problems are also used by the brain.
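To make that branching tree concrete, here is a toy sketch in Python; the choices, payoffs, and brute-force look-ahead are invented for illustration, and real reinforcement learning problems add uncertainty and learning on top of this:

```python
# Toy decision tree: each choice opens up further choices, so the payoff
# of an action depends on the whole sequence that follows it.
tree = {
    "start":  {"study": "degree", "work": "salary"},  # hypothetical actions
    "degree": {"research": 5.0, "industry": 4.0},     # leaves: long-run payoffs
    "salary": {"save": 3.0, "spend": 1.0},
}

def best_value(node):
    """Payoff of the best sequence of actions starting from this node."""
    if not isinstance(node, str):
        return node  # a leaf: its long-run payoff
    return max(best_value(child) for child in tree[node].values())

print(best_value("start"))  # 5.0: 'study' then 'research' is the best plan
```

Exhaustive look-ahead like this explodes combinatorially as the tree deepens, which is why efficient algorithms are the heart of the field.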
CNS: What have been the biggest insights your body of computational work has brought to bear on understanding reinforcement learning?
Gershman: Before I started thinking about these problems, there were some very successful theoretical ideas about how the brain might implement reinforcement learning, most notably the idea that dopamine neurons report a “reward prediction error,” the discrepancy between observed and expected reward. This error can be used by the basal ganglia to update reward expectations, which were typically thought to be encoded by neurons in the striatum. However, there were many indications that things were more complicated.
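For readers who want to see the classical theory in symbols, here is a minimal sketch of a temporal-difference prediction error in Python; the toy task, learning rate, and variable names are illustrative, not taken from any specific study:

```python
import numpy as np

# Toy task: a cue (state 0) is always followed by a reward of 1.
alpha, gamma = 0.1, 1.0   # learning rate and discount factor (assumed values)
V = np.zeros(2)           # reward expectations for [cue, terminal] states

for trial in range(100):
    r = 1.0                            # observed reward after the cue
    delta = r + gamma * V[1] - V[0]    # reward prediction error (RPE)
    V[0] += alpha * delta              # update the reward expectation

print(round(V[0], 3), round(delta, 5))  # V[0] -> 1.0 and delta -> 0
```

Note that once the reward is fully predicted, the error (and, on the classical theory, the dopamine response) goes to zero even though the reward itself is unchanged.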
Just looking narrowly at dopamine, it was clear that dopamine neurons were sensitive to different forms of uncertainty, and also that dopamine was sensitive to things other than reward, like sensory attributes of stimuli. My contribution has been to develop a new generation of theories that make sense of these deviations from the classical theory, which in turn led to new experiments.
CNS: Can you give an example of this, please?
Gershman: I showed how the sensitivity to sensory attributes could be explained as a form of “generalized prediction error” — the discrepancy between observed and expected sensory attributes — which turns out to be very useful computationally for solving the reinforcement learning problem. I collaborated with Geoff Schoenbaum at NIH to test predictions of this model; his lab showed that you could decode sensory information from ensembles of dopamine neurons, and that this information diminished over the course of learning, as you would expect if dopamine carries a generalized prediction error that shrinks as predictions improve.
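A rough sketch of that idea, assuming a vector of sensory features in place of a scalar reward; this illustrates a vector-valued prediction error in general, not the actual model or data from the collaboration:

```python
import numpy as np

# Each event now has a vector of sensory attributes, not just a scalar reward.
phi = {"cue":   np.array([1.0, 0.0]),   # hypothetical feature vectors
       "juice": np.array([0.0, 1.0])}
alpha, gamma = 0.1, 0.9
M = {s: np.zeros(2) for s in phi}       # predictions of upcoming features

for trial in range(200):
    # Generalized prediction error on the cue -> juice transition
    delta = phi["juice"] + gamma * M["juice"] - M["cue"]
    M["cue"] += alpha * delta

print(np.round(np.linalg.norm(delta), 4))  # sensory error shrinks with learning
```

Because the error is defined over sensory attributes, a signal carrying it would contain decodable sensory information early in learning that fades as predictions sharpen, which is the qualitative pattern described above.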
CNS: We see dopamine in the news a lot. What do you find to be a common misconception about dopamine?
Gershman: The major misconception about dopamine, often called the “pleasure molecule,” is that it directly reports reward. But classic studies by Wolfram Schultz showed that dopamine neurons fire in proportion to how unexpected a reward is: if the reward is entirely predictable, they barely respond.
CNS: Is there a single piece of data or study you are most excited to share in your CNS 2020 talk in March in Boston?
Gershman: I haven’t yet figured out exactly which data I’m going to show in my talk. I will probably talk about the relationship between dopamine and belief states, which led to the finding that the dopamine response to reward can sometimes be a non-monotonic function of reward magnitude (Babayan et al., 2018), a highly non-trivial prediction of the theory.
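To give a feel for how belief states can produce that non-monotonicity, here is a toy Python sketch; the two trial types, their reward magnitudes, and the noise level are all invented for illustration and are not the fitted model from Babayan et al. (2018):

```python
import numpy as np

# Two hidden trial types, one with small and one with large rewards.
means = np.array([2.0, 8.0])   # hypothetical mean reward of each trial type
values = means.copy()          # assume learned values match the true means
sigma = 1.0                    # assumed noise in the perceived reward magnitude

def rpe(r):
    """Prediction error against a belief-weighted reward expectation."""
    like = np.exp(-(r - means) ** 2 / (2 * sigma ** 2))  # state likelihoods
    belief = like / like.sum()                           # posterior over states
    return r - belief @ values

for r in [2.0, 4.0, 5.0, 6.0, 8.0]:
    print(r, round(rpe(r), 2))  # 0.0, ~2.0, 0.0, ~-2.0, 0.0: non-monotonic
```

Intermediate rewards shift the belief toward the large-reward trial type, which raises the expectation faster than the reward itself grows, so the error first rises and then dips as magnitude increases.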
CNS: What are the next steps for your work?
Gershman: I’m currently involved in several projects aiming to tie together different circuits — striatum, midbrain, hippocampus, prefrontal cortex — that play distinct roles in the brain’s reinforcement learning architecture.
CNS: Finally, what are you most looking forward to about the CNS meeting in Boston?
Gershman: I’m most looking forward to catching up with old friends.
-Lisa M.P. Munoz