A Validation of Automatically-Generated Areas-of-Interest in Videos of a Face for Eye-Tracking Research

Cited by: 24
Authors
Hessels, Roy S. [1 ,2 ]
Benjamins, Jeroen S. [1 ,3 ]
Cornelissen, Tim H. W. [4 ]
Hooge, Ignace T. C. [1 ]
Affiliations
[1] Univ Utrecht, Helmholtz Inst, Expt Psychol, Utrecht, Netherlands
[2] Univ Utrecht, Dev Psychol, Utrecht, Netherlands
[3] Univ Utrecht, Social Hlth & Org Psychol, Utrecht, Netherlands
[4] Goethe Univ Frankfurt, Dept Cognit Psychol, Scene Grammar Lab, Frankfurt, Germany
Source
FRONTIERS IN PSYCHOLOGY | 2018, Vol. 9
Keywords
eye tracking; Areas of Interest; faces; automatic; videos; INFANTS; GAZE; ATTENTION; AUTISM; MOUTH;
DOI
10.3389/fpsyg.2018.01367
Chinese Library Classification (CLC)
B84 [Psychology];
Subject Classification Codes
04 ; 0402 ;
Abstract
When mapping eye-movement behavior to the visual information presented to an observer, Areas of Interest (AOIs) are commonly employed. For static stimuli (screens without moving elements), this requires that one AOI set is constructed for each stimulus, which is possible in most eye-tracker manufacturers' software. For moving stimuli (screens with moving elements), however, it is often a time-consuming process, as AOIs have to be constructed for each video frame. A popular use case for such moving AOIs is to study gaze behavior toward moving faces. Although it is technically possible to construct AOIs automatically, the standard in this field is still manual AOI construction. This is likely because automatic AOI-construction methods are (1) technically complex, or (2) not effective enough for empirical research. To aid researchers in this field, we present and validate a method that automatically constructs AOIs for videos containing a face. The fully automatic method combines an open-source toolbox for facial landmark detection with a Voronoi-based AOI-construction method. We compared the positions of AOIs obtained using our new method, and the eye-tracking measures derived from them, to a recently published semi-automatic method. The differences between the two methods were negligible. The presented method is therefore both effective (as effective as previous methods) and efficient: no researcher time is needed for AOI construction. The software is freely available from https://osf.io/zgmeh/.
Pages: 8
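
As a rough illustration of the Voronoi-based AOI assignment summarized in the abstract, the sketch below maps gaze coordinates to facial-landmark AOIs: a point lies in the Voronoi cell of its nearest seed, so a nearest-neighbour query over labeled landmarks yields the AOI. The landmark coordinates, AOI labels, and the assign_gaze_to_aoi helper are hypothetical and not taken from the paper; the authors' actual implementation is available from https://osf.io/zgmeh/.

# Minimal, illustrative sketch of Voronoi-style AOI assignment from facial
# landmarks. Landmark groups, coordinates, and AOI labels below are assumed
# for illustration; they are not the authors' implementation.
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical per-frame facial landmarks (x, y in pixels), grouped by AOI.
landmarks_by_aoi = {
    "left_eye":  np.array([[210.0, 180.0], [230.0, 175.0], [250.0, 182.0]]),
    "right_eye": np.array([[310.0, 178.0], [330.0, 174.0], [350.0, 181.0]]),
    "nose":      np.array([[280.0, 240.0]]),
    "mouth":     np.array([[250.0, 300.0], [280.0, 310.0], [310.0, 300.0]]),
}

# Stack all landmarks and remember which AOI each one belongs to.
labels = [aoi for aoi, pts in landmarks_by_aoi.items() for _ in pts]
points = np.vstack(list(landmarks_by_aoi.values()))
tree = cKDTree(points)

def assign_gaze_to_aoi(gaze_xy):
    # A point falls in the Voronoi cell of its nearest seed, so a
    # nearest-neighbour query over the landmarks gives the AOI label.
    _, idx = tree.query(gaze_xy)
    return labels[idx]

# Example: one gaze sample per video frame (hypothetical coordinates).
print(assign_gaze_to_aoi([255.0, 190.0]))  # -> 'left_eye'
print(assign_gaze_to_aoi([290.0, 305.0]))  # -> 'mouth'

In practice, the landmark positions would be re-estimated for every video frame, so the tree (or an equivalent nearest-neighbour lookup) would be rebuilt per frame before assigning that frame's gaze samples.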