Modeling spatio-temporal patterns in intensive binary time series eye-tracking data using Generalized Additive Mixed Models

Citations: 0
Authors
Brown-Schmidt, Sarah [1 ]
Cho, Sun-Joo [1 ]
Fenn, Kimberly M. [2 ]
Trude, Alison M. [3 ]
Affiliations
[1] Vanderbilt Univ, Dept Psychol & Human Dev, Nashville, TN 37235 USA
[2] Michigan State Univ, Dept Psychol, E Lansing, MI USA
[3] Univ Illinois, Dept Psychol, Champaign, IL USA
Funding
National Science Foundation (USA);
Keywords
Visual-world eye-tracking; Speech perception; Spatio-temporal GAMM; Dynamic GLMM; SLEEP SPINDLE ACTIVITY; VISUAL WORLD; SMOOTHING PARAMETER; SPEECH-PERCEPTION; SPOKEN LANGUAGE; CONSOLIDATION; MEMORY; RECOGNITION; INTEGRATION; ADAPTATION;
DOI
10.1016/j.brainres.2025.149511
CLC classification
Q189 [Neuroscience];
Discipline code
071006;
Abstract
The aim of this paper is to introduce and illustrate the use of Generalized Additive Mixed Models (GAMMs) for analyzing intensive binary time-series eye-tracking data. We applied a spatio-temporal GAMM to data of this kind and, in doing so, reveal that both fixed condition effects and previously documented temporal contingencies in such data vary over time during speech perception. Further, spatial relationships between the point of fixation and the candidate referents on screen modulate the probability of an upcoming target fixation, and this pull (and push) on fixations changes over time as the speech is perceived. This technique not only accounts for the dominant autoregressive patterns typically seen in visual-world eye-tracking data, but does so in a way that allows modeling crossed random effects (by person and item, as is typical in psycholinguistic datasets) and complex relationships between space and time that emerge in eye-tracking data. This new technique offers ways to ask, and answer, new questions about language use and processing.
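As a rough illustration of the core idea in the abstract — modeling intensive binary fixation data with a smooth function of time under a binomial likelihood — the following Python sketch fits a penalized B-spline (P-spline) logistic curve to simulated target-fixation indicators. The simulated data, knot placement, and penalty weight are all illustrative assumptions, and this sketch deliberately omits the crossed random effects and spatial smooths of the full spatio-temporal GAMM described in the paper (which would typically be fit with dedicated GAMM software).

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.special import expit

rng = np.random.default_rng(0)

# Hypothetical visual-world-style data: at each sampled time point,
# a binary indicator of whether the target referent is fixated.
n = 2000
time = np.sort(rng.uniform(0.0, 1.0, n))
true_logit = -1.5 + 3.0 * time + np.sin(2 * np.pi * time)  # smooth trend
y = rng.binomial(1, expit(true_logit))

# Cubic B-spline basis over time: the smooth term of a GAM.
k = 3
knots = np.r_[[0.0] * k, np.linspace(0.0, 1.0, 10), [1.0] * k]
B = BSpline.design_matrix(time, knots, k).toarray()

# Second-difference roughness penalty on the spline coefficients
# (the P-spline device; the weight 1.0 is an arbitrary choice here).
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
P = 1.0 * (D.T @ D)

# Penalized IRLS (Newton's method) for the binomial likelihood.
beta = np.zeros(B.shape[1])
for _ in range(25):
    mu = expit(B @ beta)                     # fitted probabilities
    W = mu * (1.0 - mu)                      # binomial working weights
    H = B.T @ (B * W[:, None]) + P           # penalized Hessian
    g = B.T @ (y - mu) - P @ beta            # penalized score
    step = np.linalg.solve(H, g)
    beta += step
    if np.max(np.abs(step)) < 1e-8:
        break

# Smooth estimate of the probability of fixating the target over time.
p_hat = expit(B @ beta)
```

The fitted curve `p_hat` recovers the smooth rise in target-fixation probability from the raw 0/1 samples; extending this toward the paper's model would mean adding tensor-product smooths over screen coordinates and time, plus by-participant and by-item random smooths.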
Pages: 19