
Representational similarity analysis of brain potentials reveals event/schematic activation during fictional language comprehension

Poster Session E - Monday, March 31, 2025, 2:30 – 4:30 pm EDT, Back Bay Ballroom/Republic Ballroom

Melissa Troyer1 (melissa.troyer@gmail.com), Ryan J. Hubbard2; 1University of Nevada Las Vegas, 2University at Albany, State University of New York

One under-explored contributor to individual variability in prediction during language comprehension is relevant domain knowledge. Individuals with more knowledge could make MORE SUCCESSFUL predictions of upcoming linguistic input, but they might engage in prediction LESS, as they stand to learn less from comparing their predictions to the input. To explore this, we used representational similarity analysis (RSA) to analyze existing data in which EEG was recorded as individuals with a range of knowledge about Harry Potter (HP) read general-topic vs. HP sentences, each ending with a predictable or unpredictable critical, sentence-final word. If highly knowledgeable individuals are more likely to engage in active prediction, then neural similarity measured between ERPs to penultimate and critical words should be graded according to HP knowledge for HP-predictable (but not unpredictable) words and not in general-topic sentences, as these final words should be pre-activated more strongly. We did not observe such a relationship, although greater HP knowledge delayed the onset of neural similarity changes. However, HP sentences led to greater overall pre-final to final word similarity across participants compared to general-topic sentences. This difference was robust and sustained across time. HP sentences seem more likely than general-topic sentences to engender construction of a rich mental model of the events being described; thus, we suggest that RSA may detect event/schematic information (e.g., situation models) activated during language comprehension, rather than word-by-word prediction. In future work, we aim to use converging methods to further clarify the role of domain knowledge in prediction during language comprehension.
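The core measure above is the similarity between the neural response patterns evoked by the penultimate word and the critical (final) word. As a minimal illustrative sketch (not the authors' analysis pipeline), one common RSA-style implementation correlates the scalp topographies of the two ERPs across electrodes at each timepoint; the data shapes, electrode count, and simulated signal below are assumptions for demonstration only:

```python
import numpy as np

def rsa_timecourse(erp_penult, erp_final):
    """Timepoint-by-timepoint spatial similarity between two ERPs.

    erp_penult, erp_final: arrays of shape (n_electrodes, n_timepoints).
    Returns an array of length n_timepoints holding the Pearson
    correlation of the two scalp topographies at each timepoint.
    """
    n_timepoints = erp_penult.shape[1]
    sim = np.empty(n_timepoints)
    for t in range(n_timepoints):
        # Correlate the electrode patterns of the two words at time t.
        sim[t] = np.corrcoef(erp_penult[:, t], erp_final[:, t])[0, 1]
    return sim

# Hypothetical demo: two ERPs sharing a common topographic signal
# (32 electrodes, 200 timepoints) plus independent noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal((32, 200))
penult = shared + 0.5 * rng.standard_normal((32, 200))
final = shared + 0.5 * rng.standard_normal((32, 200))

sim = rsa_timecourse(penult, final)
```

Because the two simulated ERPs share a common signal, the resulting similarity timecourse is positive on average; in real data, a graded increase in such similarity for predictable endings would be the signature of pre-activation the abstract tested for.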

Topic Area: LANGUAGE: Semantic


March 29–April 1  |  2025
