Dual Input Stream Transformer for Vertical Drift Correction in Eye-tracking Reading Data

Cited by: 0

Authors
Mercier T.M. [1 ]
Budka M. [2 ]
Vasilev M.R. [4 ]
Kirkby J.A. [1 ]
Angele B. [5 ]
Slattery T.J. [1 ]
Affiliations
[1] Department of Psychology, Bournemouth University, Poole, Dorset
[2] Informatics, Bournemouth University, Poole, Dorset
[3] Department of Experimental Psychology, University College London, London
[4] CINC, Universidad Antonio de Nebrija, Madrid
Keywords
Artificial Intelligence; Computer vision; Data models; Gaze tracking; Machine Learning; Noise; Pattern Recognition; Psychology; Task analysis; Transformers; Visualization;
DOI
10.1109/TPAMI.2024.3411938
Abstract
We introduce a novel Dual Input Stream Transformer (DIST) for the challenging problem of assigning fixation points from eye-tracking data collected during passage reading to the line of text that the reader was actually focused on. This post-processing step is crucial for analysis of the reading data due to the presence of noise in the form of vertical drift. We evaluate DIST against eleven classical approaches on a comprehensive suite of nine diverse datasets. We demonstrate that combining multiple instances of the DIST model in an ensemble achieves high accuracy across all datasets. Further combining the DIST ensemble with the best classical approach yields an average accuracy of 98.17%. Our approach presents a significant step towards addressing the bottleneck of manual line assignment in reading research. Through extensive analysis and ablation studies, we identify key factors that contribute to DIST's success, including the incorporation of line overlap features and the use of a second input stream. Via rigorous evaluation, we demonstrate that DIST is robust to various experimental setups, making it a safe first choice for practitioners in the field.
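The abstract describes combining multiple DIST instances into an ensemble that assigns each fixation to a text line. The paper does not give implementation details here, but one common way to combine several per-fixation line predictions is a majority vote. The sketch below is a minimal, hypothetical illustration of that idea (the function name and data layout are assumptions, not the authors' API):

```python
from collections import Counter

def ensemble_line_assignment(predictions):
    """Majority-vote combination of per-fixation line predictions.

    predictions: list of lists, one inner list per model instance,
    each containing a predicted text-line index for every fixation.
    Returns one line index per fixation (ties broken by first seen).
    """
    assigned = []
    # zip(*predictions) groups the votes of all models for each fixation
    for fixation_votes in zip(*predictions):
        line, _count = Counter(fixation_votes).most_common(1)[0]
        assigned.append(line)
    return assigned

# Three hypothetical model instances voting on four fixations:
preds = [
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 2],
]
print(ensemble_line_assignment(preds))  # [0, 0, 1, 1]
```

In the paper's setting each inner list would come from one trained DIST instance; the reported hybrid system further falls back on the best classical drift-correction method, a step not shown in this sketch.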
Pages: 1 - 12
Page count: 11
Related papers
50 records
  • [41] Reading Span Test for Brazilian Portuguese: An Eye-Tracking Implementation
    Riascos, Jaime A.
    Brugger, Arthur M.
    Borges, Priscila
    Areas da Luz Fontes, Ana B.
    Barone, Dante C.
    COMPUTATIONAL NEUROSCIENCE, 2019, 1068 : 104 - 118
  • [42] Word and pseudoword reading in young adults: an eye-tracking study
    Marchezini, Fernanda
    Claessens, Peter Maurice Erna
    Carthery-Goulart, Maria Teresa
    CODAS, 2022, 34 (04):
  • [43] Expertise in action: An eye-tracking investigation of golf green reading
    Campbell, M.
    Moran, A.
    PERCEPTION, 2011, 40 : 111 - 111
  • [44] Universality in reading processes: Evidence from an eye-tracking study
    Matsunaga, S
    PSYCHOLOGIA, 1999, 42 (04) : 290 - 306
  • [45] Early phonological activation in reading Kanji: An eye-tracking study
    Matsunaga, S
    COGNITIVE NEUROSCIENCE STUDIES OF THE CHINESE LANGUAGE, 2002, : 157 - 171
  • [46] Rhythmic subvocalization: An eye-tracking study on silent poetry reading
    Beck, Judith
    Konieczny, Lars
    JOURNAL OF EYE MOVEMENT RESEARCH, 2020, 13 (03): : 1 - 40
  • [47] Fake News Reading on Social Media: An Eye-tracking Study
    Simko, Jakub
    Hanakova, Martina
    Racsko, Patrik
    Tomlein, Matus
    Moro, Robert
    Bielikova, Maria
    PROCEEDINGS OF THE 30TH ACM CONFERENCE ON HYPERTEXT AND SOCIAL MEDIA (HT '19), 2019, : 221 - 230
  • [48] A statistical evaluation of eye-tracking data of screening mammography: Effects of expertise and experience on image reading
    Leveque, Lucie
    Vande Berg, Baptiste
    Bosmans, Hilde
    Cockmartin, Lesley
    Keupers, Machteld
    Van Ongeval, Chantal
    Liu, Hantao
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2019, 78 : 86 - 93
  • [49] The ZuCo benchmark on cross-subject reading task classification with EEG and eye-tracking data
    Hollenstein, Nora
    Trondle, Marius
    Plomecka, Martyna
    Kiegeland, Samuel
    Ozyurt, Yilmazcan
    Jaeger, Lena A.
    Langer, Nicolas
    FRONTIERS IN PSYCHOLOGY, 2023, 13
  • [50] Supervised EEG Ocular Artefact Correction Through Eye-Tracking
    Lourenco, P. Rente
    Abbott, W. W.
    Faisal, A. Aldo
    ADVANCES IN NEUROTECHNOLOGY, ELECTRONICS AND INFORMATICS, 2016, 12 : 99 - 113