MYFix: Automated Fixation Annotation of Eye-Tracking Videos

Cited by: 0
Authors
Alinaghi, Negar [1 ]
Hollendonner, Samuel [1 ]
Giannopoulos, Ioannis [1 ]
Affiliations
[1] Vienna Univ Technol, Res Div Geoinformat, Wiedner Hauptstr 8-E120, A-1040 Vienna, Austria
Keywords
automatic fixation annotation; object detection; semantic segmentation; outdoor mobile eye-tracking; gaze
DOI
10.3390/s24092666
Chinese Library Classification
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
In mobile eye-tracking research, the automatic annotation of fixation points is an important yet difficult task, especially in varied and dynamic environments such as outdoor urban landscapes. The complexity is compounded by the constant movement of both the observer and the surrounding scene. This paper presents a novel approach that integrates two foundation models, YOLOv8 and Mask2Former, into a pipeline that automatically annotates fixation points without requiring additional training or fine-tuning. Our pipeline leverages YOLOv8's extensive training on the MS COCO dataset for object detection and Mask2Former's training on the Cityscapes dataset for semantic segmentation. This integration not only streamlines the annotation process but also improves accuracy and consistency, yielding reliable annotations even in complex scenes with multiple objects side by side or at different depths. Validation in two experiments demonstrates its effectiveness, achieving 89.05% accuracy in a controlled data-collection setting and 81.50% accuracy in a real-world outdoor wayfinding scenario. With an average runtime of 1.61 ± 0.35 s per frame, our approach is a robust solution for automatic fixation annotation.
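The abstract describes combining YOLOv8 detections (MS COCO classes) with Mask2Former semantic segmentation (Cityscapes classes) to label the object under each fixation. The sketch below illustrates one way such a pipeline could be wired together in Python with the ultralytics and Hugging Face transformers packages; the checkpoint names, the box-first/segmentation-fallback decision rule, and the helper annotate_fixation are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a YOLOv8 + Mask2Former fixation-annotation pipeline.
# Checkpoints, the box-first/segmentation-fallback rule, and annotate_fixation
# are assumptions for illustration, not the authors' exact method.
import torch
from PIL import Image
from ultralytics import YOLO
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

detector = YOLO("yolov8x.pt")  # pretrained on MS COCO
processor = AutoImageProcessor.from_pretrained(
    "facebook/mask2former-swin-large-cityscapes-semantic")
segmenter = Mask2FormerForUniversalSegmentation.from_pretrained(
    "facebook/mask2former-swin-large-cityscapes-semantic")

def annotate_fixation(frame_path: str, fx: float, fy: float) -> str:
    """Return a label for the fixation point (fx, fy) in the given video frame."""
    image = Image.open(frame_path).convert("RGB")

    # 1) Object detection: if the fixation lies inside a detected box,
    #    return the COCO class of the tightest enclosing box.
    result = detector(image, verbose=False)[0]
    hits = []
    for box, cls in zip(result.boxes.xyxy.tolist(), result.boxes.cls.tolist()):
        x1, y1, x2, y2 = box
        if x1 <= fx <= x2 and y1 <= fy <= y2:
            hits.append(((x2 - x1) * (y2 - y1), detector.names[int(cls)]))
    if hits:
        return min(hits)[1]  # smallest enclosing box wins

    # 2) Fallback: Cityscapes semantic-segmentation label at the fixation pixel.
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = segmenter(**inputs)
    seg = processor.post_process_semantic_segmentation(
        outputs, target_sizes=[image.size[::-1]])[0]  # (H, W) class-id map
    class_id = int(seg[int(fy), int(fx)])
    return segmenter.config.id2label[class_id]

# Example: label the fixation at pixel (960, 540) in one exported frame.
# print(annotate_fixation("frame_000123.png", 960, 540))
```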
Pages: 21