
Building multimodal classification of attentional states in the laboratory

Poster Session A - Saturday, March 29, 2025, 3:00 – 5:00 pm EDT, Back Bay Ballroom/Republic Ballroom

John Thorp1 (john.n.thorp@gmail.com), Joshua Friedman1, Hengbo Tong1,2, Helin Wang2, Ruoxuan Li1, Lily Penn1, Emile Al-Billeh1, Alfredo Spagna1, Xiaofu He1,2; 1Columbia University, 2New York State Psychiatric Institute

As students and the general populace become ever more quantified and evaluated, it is increasingly important to understand what information about attentional states can validly be gleaned from observational data. We were therefore interested in the degree to which observational data collected from three separate modalities (brain activity recorded via mobile EEG headsets, facial action amplitudes, and posture) during video lectures could be used to classify attentional states that predict subsequent learning of the material. Here, we define subsequent learning as meeting three conditions: 1) a correct response on a multiple-choice basic definition question, 2) a correct response on a multiple-choice generalized application question, and 3) reporting not knowing these answers before the lecture. For each of the three modalities, we trained unimodal classifiers to predict the probability that participants were attending. Of these, facial action amplitudes provided the most reliable classification of attentional states. Analyses are currently focused on fusing these data streams into a single multimodal classifier that can outperform the unimodal approaches and better contextualize which features are necessary for insight into attentional states. In the future, these models could be implemented in a neurofeedback framework intended to improve volitional attention regulation in the classroom.
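The abstract describes training unimodal classifiers per modality and then fusing them into a multimodal classifier. The sketch below illustrates one common way to do this (late fusion / stacking of out-of-fold probabilities) under assumptions not stated in the abstract: the feature matrices, their dimensions, the labels, and the use of logistic regression are all hypothetical placeholders, not the authors' actual pipeline.

```python
# Minimal late-fusion sketch. All feature shapes, the synthetic data, and the
# choice of logistic regression are illustrative assumptions, not the authors'
# exact methods.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, cross_val_score

rng = np.random.default_rng(0)
n_segments = 200  # hypothetical number of lecture segments

# Hypothetical per-segment features for each modality: EEG measures,
# facial action amplitudes, and posture descriptors.
X_eeg = rng.normal(size=(n_segments, 32))
X_face = rng.normal(size=(n_segments, 17))
X_posture = rng.normal(size=(n_segments, 6))
y = rng.integers(0, 2, size=n_segments)  # 1 = material subsequently learned

def oof_attention_probs(X, y):
    """Out-of-fold probabilities from a unimodal classifier, so the fusion
    stage never sees predictions made on the data it was trained on."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]

# Stack the three unimodal probability streams into one feature matrix.
fused = np.column_stack(
    [oof_attention_probs(X, y) for X in (X_eeg, X_face, X_posture)]
)

# Second-level classifier fuses the unimodal outputs into one prediction.
fusion_clf = LogisticRegression(max_iter=1000)
print("fused CV accuracy:", cross_val_score(fusion_clf, fused, y, cv=5).mean())
```

A late-fusion design like this also makes it easy to compare the fused model against each unimodal classifier and to inspect which modality's probabilities carry the most weight, in the spirit of the feature-contextualization goal described above.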

Topic Area: ATTENTION: Multisensory
