Estimating 3D spatiotemporal point of regard: a device evaluation

Cited: 0
Authors
Wagner, Peter [1 ,2 ]
Ho, Arthur [1 ,2 ]
Kim, Juno [2 ]
Affiliations
[1] Brien Holden Vis Inst Ltd, Lv 4, RMB North Wing, 14 Barker Str, Sydney, NSW 2052, Australia
[2] Univ New South Wales, Sch Optometry & Vis Sci, Lv 3, RMB North Wing, 14 Barker Str, Sydney, NSW 2052, Australia
Keywords
PIVOT POINT; EYE; LOCATION; PUPIL; ACCURACY; TRACKING; HUMANS; MODEL
DOI
10.1364/JOSAA.457663
Chinese Library Classification (CLC)
O43 [Optics]
Subject classification code
070207; 0803
Abstract
This paper presents and evaluates a system and method that record spatiotemporal scene information together with the location of the center of visual attention, i.e., the spatiotemporal point of regard (PoR), in ecological environments. A primary research application of the proposed system and method is enhancing current 2D visual attention models. Current eye-tracking approaches collapse a scene's depth structure to a 2D image, omitting visual cues that trigger important functions of the human visual system (e.g., accommodation and vergence). We combined head-mounted eye tracking with a miniature time-of-flight camera to produce a system that can estimate the spatiotemporal location of the PoR (the point of highest visual attention) within 3D scene layouts. Maintaining calibration accuracy is a primary challenge for gaze mapping; hence, we measured accuracy repeatedly by matching the PoR to fixated targets arranged over a range of working distances in depth. Accuracy was estimated as the deviation of the estimated PoR from the known locations of scene targets. We found that estimates of 3D PoR had an overall accuracy of approximately 2 degrees omnidirectional mean average error (OMAE), with variation over a 1 h recording maintained within 3.6 degrees OMAE. This method can be used to determine accommodation and vergence cues of the human visual system continuously within habitual environments, including everyday applications (e.g., use of hand-held devices). (c) 2022 Optica Publishing Group
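The abstract implies two computational steps: back-projecting a 2D gaze point into the 3D scene using the time-of-flight depth map, and scoring accuracy as the angular deviation (OMAE) between the estimated PoR and known target positions. The sketch below illustrates both under stated assumptions: a pinhole scene-camera model, a shared coordinate frame for gaze and depth data, and a fixed origin standing in for the eye's rotation center. All function names and the intrinsics fx, fy, cx, cy are illustrative, not the authors' implementation.

```python
import numpy as np

def por_from_gaze_and_depth(gaze_px, depth_map, fx, fy, cx, cy):
    # Back-project the 2D gaze point (pixel coordinates) into 3D using the
    # time-of-flight depth map. Pinhole model with assumed intrinsics.
    u, v = gaze_px
    z = depth_map[int(round(v)), int(round(u))]  # depth in metres at the gaze pixel
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def angular_error_deg(por_est, target, origin):
    # Angle in degrees between the rays from `origin` (here an assumed
    # eye rotation centre) to the estimated PoR and to the known target.
    a, b = por_est - origin, target - origin
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def omae(por_estimates, targets, origin=np.zeros(3)):
    # Omnidirectional mean average error: the mean angular deviation
    # across fixations, pooled over all directions.
    return float(np.mean([angular_error_deg(p, t, origin)
                          for p, t in zip(por_estimates, targets)]))

# Example: a target at 0.5 m straight ahead with an estimated PoR offset
# by ~17.5 mm laterally yields roughly the 2-degree error reported.
target = np.array([0.0, 0.0, 0.5])
est = np.array([0.0175, 0.0, 0.5])
print(angular_error_deg(est, target, np.zeros(3)))  # ~2.0
```

The example at the end shows why a 2 degree OMAE is a meaningful bound for near work: at a typical hand-held-device distance of 0.5 m it corresponds to a lateral PoR displacement of under 2 cm.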
Pages: 1343-1351
Page count: 9
Related papers (50 items)
  • [41] Estimating dimensions of free-swimming fish using 3D point distribution models
    Tillett, R
    McFarlane, N
    Lines, J
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2000, 79 (01) : 123 - 141
  • [42] Learning spatiotemporal lip dynamics in 3D point cloud stream for visual voice activity detection
    Zhang, Jie
    Cao, Jingyi
    Sun, Junhua
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 87
  • [43] 3D medical thermography device
    Moghadam, Peyman
    THERMOSENSE: THERMAL INFRARED APPLICATIONS XXXVII, 2015, 9485
  • [44] 3D printed device for epitachophoresis
    Voráčová, Ivona
    Přikryl, Jan
    Novotný, Jakub
    Datinská, Vladimíra
    Yang, Jaeyoung
    Astier, Yann
    Foret, František
ANALYTICA CHIMICA ACTA, 2021, 1154
  • [45] 3DTouch: A wearable 3D input device for 3D applications
    Anh Nguyen
    Banic, Amy
    2015 IEEE VIRTUAL REALITY CONFERENCE (VR), 2015, : 373 - 373
  • [46] 3DTouch: A Wearable 3D Input Device for 3D Applications
    Anh Nguyen
    Banic, Amy
    2015 IEEE VIRTUAL REALITY CONFERENCE (VR), 2015, : 55 - 61
  • [48] Single-Pass Composable 3D Lens Rendering and Spatiotemporal 3D Lenses
    Borst, Christoph W.
    Tiesel, Jan-Phillip
    Habib, Emad
    Das, Kaushik
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2011, 17 (09) : 1259 - 1272
  • [49] 6-4 3D Scanner (3D Input Device)
    Horikoshi T.
    Journal of the Institute of Image Electronics Engineers of Japan, 2019, 48 (01) : 96 - 97
  • [50] Evaluation of the correctness of a 3D recording device for mandibular functional movement in laboratory
    Zhao, Tian
    Sui, Huaxin
    Yang, Huifang
    Wang, Yong
    Sun, Yuchun
    INTERNATIONAL CONFERENCE ON OPTICAL AND PHOTONIC ENGINEERING (ICOPEN 2015), 2015, 9524