MYFix: Automated Fixation Annotation of Eye-Tracking Videos

Cited by: 0
|
Authors
Alinaghi, Negar [1 ]
Hollendonner, Samuel [1 ]
Giannopoulos, Ioannis [1 ]
Affiliations
[1] Vienna Univ Technol, Res Div Geoinformat, Wiedner Hauptstr 8-E120, A-1040 Vienna, Austria
Keywords
automatic fixation annotation; object detection; semantic segmentation; outdoor mobile eye-tracking; gaze
DOI
10.3390/s24092666
Chinese Library Classification
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
In mobile eye-tracking research, the automatic annotation of fixation points is an important yet difficult task, especially in varied and dynamic environments such as outdoor urban landscapes, where both the observer and the surrounding scene are in constant motion. This paper presents a novel approach that integrates two foundation models, YOLOv8 and Mask2Former, into a pipeline that automatically annotates fixation points without requiring additional training or fine-tuning. Our pipeline leverages YOLO's extensive training on the MS COCO dataset for object detection and Mask2Former's training on the Cityscapes dataset for semantic segmentation. This integration not only streamlines the annotation process but also improves accuracy and consistency, ensuring reliable annotations even in complex scenes with multiple objects side by side or at different depths. Validation through two experiments demonstrates its effectiveness, achieving 89.05% accuracy in a controlled data-collection experiment and 81.50% accuracy in a real-world outdoor wayfinding scenario. With an average runtime of 1.61 ± 0.35 s per frame, our approach stands as a robust solution for automatic fixation annotation.
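The abstract describes the pipeline only at a high level. The sketch below shows one plausible way to combine the two models for a single video frame: test the fixation coordinate against YOLOv8 detections first, and fall back to Mask2Former's Cityscapes segmentation when no box contains it. The checkpoint names (yolov8x.pt, facebook/mask2former-swin-large-cityscapes-semantic), the fallback ordering, and the smallest-enclosing-box rule for overlapping detections are illustrative assumptions, not the authors' published implementation.

```python
# Illustrative sketch only: the combination logic below (YOLO first,
# segmentation fallback, smallest enclosing box at depth overlaps) is an
# assumption, not the MYFix authors' published code.
import torch
from PIL import Image
from ultralytics import YOLO
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

detector = YOLO("yolov8x.pt")  # MS COCO weights (assumed checkpoint)
SEG_CKPT = "facebook/mask2former-swin-large-cityscapes-semantic"
processor = AutoImageProcessor.from_pretrained(SEG_CKPT)
segmenter = Mask2FormerForUniversalSegmentation.from_pretrained(SEG_CKPT)

def annotate_fixation(frame: Image.Image, fx: int, fy: int) -> str:
    """Return a semantic label for the fixation pixel (fx, fy) in `frame`."""
    # 1) Object detection: collect every box that contains the fixation point.
    det = detector(frame, verbose=False)[0]
    hits = []
    for box, cls in zip(det.boxes.xyxy.tolist(), det.boxes.cls.tolist()):
        x1, y1, x2, y2 = box
        if x1 <= fx <= x2 and y1 <= fy <= y2:
            area = (x2 - x1) * (y2 - y1)
            hits.append((area, det.names[int(cls)]))
    if hits:
        # Prefer the smallest enclosing box: a crude proxy for the nearest
        # object when detections overlap at different depths.
        return min(hits)[1]
    # 2) Fallback: Cityscapes semantic segmentation of the whole scene.
    inputs = processor(images=frame, return_tensors="pt")
    with torch.no_grad():
        outputs = segmenter(**inputs)
    seg = processor.post_process_semantic_segmentation(
        outputs, target_sizes=[frame.size[::-1]])[0]  # shape (height, width)
    return segmenter.config.id2label[int(seg[fy, fx])]
```

A caller would map each fixation event to its video frame and gaze coordinates, e.g. label = annotate_fixation(Image.open("frame.png"), 512, 300); the hypothetical frame name and coordinates here are placeholders.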
Pages: 21
Related Papers
50 records in total
  • [21] Getting at the Cognitive Complexity of Linguistic Metadata Annotation - A Pilot Study Using Eye-Tracking
    Lohmann, Steffen
    Tomanek, Katrin
    Ziegler, Juergen
    Hahn, Udo
    COGNITION IN FLUX, 2010: 2146-2151
  • [22] Visual saliency in captioned digital videos and learning of English collocations: An eye-tracking study
    Choi, Sungmook
    LANGUAGE LEARNING & TECHNOLOGY, 2023, 27 (01): 28-28
  • [23] Development and Calibration of an Eye-Tracking Fixation Identification Algorithm for Immersive Virtual Reality
    Llanes-Jurado, Jose
    Marin-Morales, Javier
    Guixeres, Jaime
    Alcaniz, Mariano
    SENSORS, 2020, 20 (17): 1-15
  • [24] Correlating Eye-Tracking Fixation Metrics and Neuropsychological Assessment after Ischemic Stroke
    Ionescu, Alec
    Stefanescu, Emanuel
    Strilciuc, Stefan
    Rafila, Alexandru
    Muresanu, Dafin
    MEDICINA-LITHUANIA, 2023, 59 (08)
  • [25] Eye-Tracking Infrared Optometer
    Okuyama, F.
    Tokoro, T.
    Fujieda, M.
    OPHTHALMIC AND PHYSIOLOGICAL OPTICS, 1990, 10 (03): 291-299
  • [26] Eye-Tracking Patterns in Schizophrenia
    Holzman, P. S.
    Proctor, L. R.
    Hughes, D. W.
    SCIENCE, 1973, 181 (4095): 179-181
  • [27] Eye-tracking Social Preferences
    Jiang, Ting
    Potters, Jan
    Funaki, Yukihiko
    JOURNAL OF BEHAVIORAL DECISION MAKING, 2016, 29 (2-3): 157-168
  • [28] Eye-Tracking Patterns in Schizophrenia
    Troost, B. T.
    Daroff, R. B.
    Dell'Osso, L. F.
    SCIENCE, 1974, 184 (4142): 1202-1203
  • [29] Smart Eye-Tracking System
    Juhong, Aniwat
    Treebupachatsakul, T.
    Pintavirooj, C.
    2018 INTERNATIONAL WORKSHOP ON ADVANCED IMAGE TECHNOLOGY (IWAIT), 2018
  • [30] Eye-tracking and conceptual combination
    Janetzko, D.
    PROCEEDINGS OF THE TWENTY-SECOND ANNUAL CONFERENCE OF THE COGNITIVE SCIENCE SOCIETY, 2000: 687-692