Language models capture efficient information compression in human memory

Poster Session C - Sunday, March 30, 2025, 5:00 – 7:00 pm EDT, Back Bay Ballroom/Republic Ballroom

Jianing Mu1, Alexander G. Huth1, Alison R. Preston1; 1The University of Texas at Austin

We experience a continuous stream of information, but we do not remember everything equally well. To explain non-uniform memory, current theories suggest that humans selectively encode surprising moments (e.g., event boundaries), a costly strategy that overlooks the online extraction of an experience's central meaning, a crucial ability of our memory system. Instead, we propose an efficient strategy in which humans uniformly sample incoming information at periodic intervals. By leveraging knowledge of the stimulus structure, this strategy simultaneously encodes specific moments and the gist of the experience: moments tied to the rest of the experience are better remembered than standalone ones. We tested this hypothesis using data from 413 participants who listened to narrative stories while segmenting them into events, then verbally recalled each story immediately after it ended. Using large language models (LLMs), we quantified the stimuli's information structure and participants' recall quality. Across eight stories, our incremental uniform sampling model predicted participants' recall beyond alternative models such as event surprise. We further showed that event boundaries are better remembered because they share more information with the rest of the story, not because they are more surprising. Moreover, adjusting the sampling rate allows recall at varying levels of detail: by changing how LLMs retrieve information from the story, we found that frequent sampling produces more detailed recall, while relying on existing knowledge yields more gist-like recall. Overall, these results show that knowing the informational structure of natural experiences enables efficient memory encoding and retrieval.
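To make the contrast concrete, below is a minimal sketch (not the authors' code) of the two encoding strategies the abstract compares: periodic uniform sampling of sentences versus selecting the most surprising ones, with surprise estimated as mean token surprisal under an off-the-shelf causal LM. The choice of GPT-2, the sentence-level granularity, and the helper names (mean_surprisal, uniform_sample, surprise_sample) are illustrative assumptions, not details from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical illustration: uniform periodic sampling vs. surprise-based
# selection of story sentences, scored with an off-the-shelf LM (GPT-2).
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
lm.eval()

@torch.no_grad()
def mean_surprisal(context: str, sentence: str) -> float:
    """Mean negative log-probability (nats) of `sentence` tokens,
    conditioned on `context` (the story so far). Assumes `context`
    is non-empty; BPE re-tokenization at the join is approximate."""
    n_ctx = tok(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(context + " " + sentence, return_tensors="pt").input_ids
    logits = lm(full_ids).logits
    # Log-prob of each token given all preceding tokens.
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_logp = logp[torch.arange(targets.numel()), targets]
    # Average only over the sentence's tokens, not the context's.
    return -token_logp[n_ctx - 1:].mean().item()

def uniform_sample(sentences, period=3):
    """Periodic uniform sampling: keep every `period`-th sentence."""
    return [s for i, s in enumerate(sentences) if i % period == 0]

def surprise_sample(sentences, k=3):
    """Surprise-based selection: keep the k most surprising sentences,
    scoring each one incrementally against the story so far."""
    scores = [mean_surprisal(" ".join(sentences[:i]) or tok.bos_token, s)
              for i, s in enumerate(sentences)]
    top = sorted(range(len(sentences)), key=lambda i: -scores[i])[:k]
    return [sentences[i] for i in sorted(top)]  # restore story order
```

In this toy setup, lowering `period` corresponds to the denser sampling the abstract links to more detailed recall, while sparser sampling leaves more to be reconstructed from existing knowledge, yielding more gist-like recall.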

Topic Area: LONG-TERM MEMORY: Episodic
