MYFix: Automated Fixation Annotation of Eye-Tracking Videos

Cited by: 0
Authors
Alinaghi, Negar [1 ]
Hollendonner, Samuel [1 ]
Giannopoulos, Ioannis [1 ]
Affiliations
[1] Vienna Univ Technol, Res Div Geoinformat, Wiedner Hauptstr 8-E120, A-1040 Vienna, Austria
Keywords
automatic fixation annotation; object detection; semantic segmentation; outdoor mobile eye-tracking; gaze
DOI
10.3390/s24092666
Chinese Library Classification
O65 [Analytical Chemistry];
Subject Classification Codes
070302 ; 081704 ;
Abstract
In mobile eye-tracking research, the automatic annotation of fixation points is an important yet difficult task, especially in varied and dynamic environments such as outdoor urban landscapes. This complexity is increased by the constant movement and dynamic nature of both the observer and their surroundings in urban spaces. This paper presents a novel approach that integrates the capabilities of two foundation models, YOLOv8 and Mask2Former, as a pipeline to automatically annotate fixation points without requiring additional training or fine-tuning. Our pipeline leverages YOLO's extensive training on the MS COCO dataset for object detection and Mask2Former's training on the Cityscapes dataset for semantic segmentation. This integration not only streamlines the annotation process but also improves accuracy and consistency, ensuring reliable annotations even in complex scenes with multiple objects side by side or at different depths. Validation through two experiments showcases its efficiency, achieving 89.05% accuracy in a controlled data-collection experiment and 81.50% accuracy in a real-world outdoor wayfinding scenario. With an average runtime of 1.61 ± 0.35 s per frame, our approach stands as a robust solution for automatic fixation annotation.
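The abstract describes the pipeline only at a high level. Below is a minimal Python sketch of what a per-frame fixation-annotation step in this spirit could look like, assuming the ultralytics YOLOv8 API and a Hugging Face Mask2Former checkpoint trained on Cityscapes; the combination rule (prefer an enclosing detection box, otherwise fall back to the semantic class under the fixation) and the function name annotate_fixation are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch: label a fixation point in one video frame using
# pretrained YOLOv8 (MS COCO) and Mask2Former (Cityscapes), no fine-tuning.
import torch
from PIL import Image
from ultralytics import YOLO
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

detector = YOLO("yolov8x.pt")  # MS COCO object detector (assumed checkpoint)
seg_name = "facebook/mask2former-swin-large-cityscapes-semantic"
processor = AutoImageProcessor.from_pretrained(seg_name)
segmenter = Mask2FormerForUniversalSegmentation.from_pretrained(seg_name)


def annotate_fixation(frame: Image.Image, fx: int, fy: int) -> str:
    """Return a label for the fixation at pixel coordinates (fx, fy)."""
    # 1) Object detection: if the fixation falls inside a detected box,
    #    annotate it with the COCO class of that detection.
    det = detector(frame, verbose=False)[0]
    for box, cls in zip(det.boxes.xyxy.tolist(), det.boxes.cls.tolist()):
        x1, y1, x2, y2 = box
        if x1 <= fx <= x2 and y1 <= fy <= y2:
            return det.names[int(cls)]  # e.g. "person", "car"

    # 2) Fallback: semantic segmentation class at the fixation pixel.
    inputs = processor(images=frame, return_tensors="pt")
    with torch.no_grad():
        outputs = segmenter(**inputs)
    seg = processor.post_process_semantic_segmentation(
        outputs, target_sizes=[frame.size[::-1]]  # (height, width)
    )[0]
    return segmenter.config.id2label[int(seg[fy, fx])]  # e.g. "building"


# Usage example (hypothetical frame and fixation coordinates):
# frame = Image.open("frame_000123.png")
# print(annotate_fixation(frame, fx=640, fy=360))
```

In this sketch, the detector handles discrete foreground objects while the segmenter provides a label for every remaining pixel (road, building, vegetation, sky), which is one straightforward way to keep complex street scenes fully annotated.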
Pages: 21