CNS 2023 Q&A: Anna Schapiro
Machine learning and artificial intelligence continue to progress, with much focus lately on new innovations like ChatGPT, a chatbot that can give sometimes shockingly detailed responses to a variety of questions. In the background of these developments, cognitive neuroscientists continue to work to understand what makes humans such elegant learners, exploring the similarities and differences between machine and human learning.
For Anna Schapiro, a cognitive neuroscientist at the University of Pennsylvania, one important difference has to do with people’s ability to continuously learn in a changing environment. “This is easy for us humans, but surprisingly hard for standard neural network models, making it a big topic of interest in machine learning,” she says. In a recent paper, Schapiro and colleagues referred to this ability to learn amid change as “graceful.”
Throughout her work, Schapiro and colleagues have explored complementary learning systems in the human hippocampus, which they posit includes a pathway for rapid statistical learning and one for episodic memory functions, as well as interactions with slower-learning neocortical systems. Their work has shown, for example, that changing interactions between the hippocampus and neocortex across different stages of sleep can help a learner track information continuously over time. For this body of work on human learning, Schapiro is a co-recipient of the Young Investigator Award and will deliver her award lecture this March in San Francisco at the CNS annual meeting.
I spoke with Schapiro about this work and its significance, how she got started in this line of research, and what is next for her work.
CNS: Can you give an example of how humans gracefully learn over time?
Schapiro: To demonstrate the human ability to learn “gracefully” over time: If you are born in a country and learn language A as your native language, then move as an adult to a new country where language B is spoken and you no longer hear or speak A, you do not forget language A, at least not immediately or catastrophically. “Catastrophic forgetting” is the term used to describe the behavior of neural network models in these settings, which often exhibit profound retroactive interference, completely forgetting A after some exposure to B.
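To make the contrast concrete, here is a minimal, hypothetical sketch in Python (not taken from Schapiro’s work): a tiny one-layer network learns one set of random input-output associations standing in for language A, is then trained only on a second set standing in for language B, and is finally tested again on A. The network size, pattern statistics, and training settings are all illustrative assumptions; the point is simply that sequential training on B alone tends to produce strong retroactive interference with A.

```python
# A minimal, hypothetical illustration of catastrophic forgetting (not the
# authors' model): a one-layer sigmoid network learns random associations
# for "language A", then trains only on "language B", and loses much of A.
import numpy as np

rng = np.random.default_rng(0)

def make_task(n_pairs, dim):
    """Random +/-1 input patterns paired with random binary target patterns."""
    x = rng.choice([-1.0, 1.0], size=(n_pairs, dim))
    y = rng.choice([0.0, 1.0], size=(n_pairs, dim))
    return x, y

def train(W, x, y, epochs=3000, lr=0.5):
    """Gradient descent on a one-layer sigmoid network with cross-entropy loss."""
    for _ in range(epochs):
        pred = 1.0 / (1.0 + np.exp(-x @ W))
        W = W - lr * x.T @ (pred - y) / len(x)
    return W

def accuracy(W, x, y):
    """Fraction of output units whose thresholded prediction matches the target."""
    pred = (1.0 / (1.0 + np.exp(-x @ W)) > 0.5).astype(float)
    return float((pred == y).mean())

dim = 20
task_a = make_task(10, dim)   # "language A"
task_b = make_task(10, dim)   # "language B"

W = np.zeros((dim, dim))
W = train(W, *task_a)
print("A right after learning A:", accuracy(W, *task_a))  # typically near 1.0

W = train(W, *task_b)         # exposure to B only, no further rehearsal of A
print("B after learning B:", accuracy(W, *task_b))
print("A after learning B:", accuracy(W, *task_a))        # typically well below its earlier level
```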
CNS: And you think that sleep helps drive that ability?
Schapiro: Yes, that is one part of the work I will be presenting. We show that in a neural network model with a hippocampus that retains recent information and a neocortex that holds longer-term stores, hippocampal-driven replay during slow-wave sleep helps the brain consolidate recent information, while cortically-driven replay during rapid eye movement sleep helps the brain revisit (and thereby protect) remote information. Alternating between these stages over the course of the night allows the model to learn new information without forgetting the old, accomplishing the kind of graceful learning over time that humans exhibit.
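The model itself involves far richer hippocampal-cortical dynamics across sleep stages than anything shown here, but the basic protective value of revisiting older material alongside new learning can be illustrated by continuing the hypothetical sketch above: interleaving the stored A pairs into training on B, as a very loose stand-in for replay of remote information, largely spares performance on A.

```python
# Continuation of the toy sketch above (a loose stand-in for replay, not the
# authors' hippocampal-cortical sleep model): the stored A pairs are interleaved
# into training on B, so old associations are revisited while new ones are learned.
def train_interleaved(W, new_task, old_task, epochs=3000, lr=0.5):
    """Each gradient step sees the new pairs together with replayed old pairs."""
    x = np.concatenate([new_task[0], old_task[0]])
    y = np.concatenate([new_task[1], old_task[1]])
    return train(W, x, y, epochs=epochs, lr=lr)

W2 = np.zeros((dim, dim))
W2 = train(W2, *task_a)                       # learn A first, as before
W2 = train_interleaved(W2, task_b, task_a)    # learn B while "replaying" A
print("A after learning B with replay:", accuracy(W2, *task_a))  # typically stays much higher
print("B after learning B with replay:", accuracy(W2, *task_b))
```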
CNS: What got you started in cognitive neuroscience and what personally drives your work forward?
Schapiro: I have always wanted to understand how the brain produces the mind: How do vast networks of neurons work together to produce our behavior and our experience? As an undergraduate, I became very excited about neural network models as a tool for allowing us to understand this bridge from brain to mind. These artificial networks of simple neuron-like units somehow accomplish impressive tasks, often in a strikingly human-like way. I worked with these models as an undergraduate in Jay McClelland’s lab, and then wanted to learn how to run experiments to test the models in graduate school. I was lucky to get exposure to several empirical methodologies — behavior, fMRI, patient testing, and EEG/polysomnography — through my PhD and postdoc, while continuing to work with models. My work is propelled by the dialog between model and experiment: The models motivate predictions for experiments, and the experiments constrain and inspire advances in the models, creating a cycle that, hopefully, leads to both deeper and broader mechanistic understanding over time.
CNS: What do you most want people to know about the work you will present this March in San Francisco?
Schapiro: Neural network models typically use distributed representations, a powerful form of representation in which populations of neurons become responsive to multiple related features of the environment. The main take-home of the talk is that we rapidly form these kinds of representations in the hippocampus and subsequently build them up in neocortex through hippocampal-cortical interactions during sleep.
CNS: What’s next for your research?
Schapiro: One new line of research in the lab that I am excited about uses the targeted memory reactivation (TMR) technique: Participants learn about some novel objects that have associated sounds, and we then quietly play some of the sounds at carefully chosen moments during a subsequent nap to encourage the brain to process the associated objects. We are using the technique to investigate whether reactivation during sleep helps to construct “abstract” representations of recent information, how the order in which information is reactivated during sleep matters, and how sleep may help to integrate new information into existing semantic knowledge.
CNS: What are you most looking forward to at the CNS annual meeting in San Francisco this March?
Schapiro: I am looking forward to catching up with friends and colleagues, hearing about all their latest exciting work, and the wonderful opportunity to share our lab’s research with everyone.
-Lisa M.P. Munoz