Gaze-based attention network analysis in a virtual reality classroom

Cited by: 1
Authors
Stark, Philipp [1]
Hasenbein, Lisa [1]
Kasneci, Enkelejda [2]
Goellner, Richard [1,3]
Affiliations
[1] Univ Tubingen, Hector Res Inst, Europastr 6, D-72072 Tubingen, Germany
[2] Tech Univ Munich, Chair Human Ctr Technol Learning, Arcisstr 21, D-80333 Munich, Germany
[3] Univ Regensburg, Inst Educ Sci, Univ Str 31, D-93053 Regensburg, Germany
Keywords
Network analysis; Eye tracking; Virtual reality; Gaze-ray casting; Visual attention; Graph theory; Centrality; Fixations
DOI
10.1016/j.mex.2024.102662
Chinese Library Classification
O [Mathematical sciences and chemistry]; P [Astronomy and earth sciences]; Q [Biological sciences]; N [General natural sciences]
Discipline classification codes
07; 0710; 09
Abstract
This article provides a step-by-step guideline for measuring and analyzing visual attention in 3D virtual reality (VR) environments based on eye-tracking data. We propose a solution to the challenges of obtaining relevant eye-tracking information in a dynamic 3D virtual environment and calculating interpretable indicators of learning and social behavior. With a method called "gaze-ray casting," we simulated 3D gaze movements to obtain information about the gazed objects. This information was used to create graphical models of visual attention, establishing attention networks. These networks represented participants' gaze transitions between different entities in the VR environment over time. Measures of centrality, distribution, and interconnectedness of the networks were calculated to describe the network structure. The measures, derived from graph theory, allowed for statistical inference testing and the interpretation of participants' visual attention in 3D VR environments. Our method provides useful insights when analyzing students' learning in a VR classroom, as reported in a corresponding evaluation article with N = 274 participants.
• Guidelines on implementing gaze-ray casting in VR using the Unreal Engine and the HTC VIVE Pro Eye.
• Creating gaze-based attention networks and analyzing their network structure.
• Implementation tutorials and the open-source software code are provided via OSF: https://osf.io/pxjrc/?view_only=1b6da45eb93e4f9eb7a138697b941198
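The core idea of the abstract — turning a time-ordered sequence of gazed objects into a directed, weighted transition network and describing it with graph-theoretical measures — can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' OSF implementation: the entity labels (`teacher`, `screen`, `peer_1`) are hypothetical stand-ins for classroom objects, and the two measures shown (degree centrality and density) are standard graph-theory definitions corresponding to the centrality and interconnectedness measures mentioned in the abstract.

```python
from collections import Counter

def build_attention_network(gaze_sequence):
    """Directed, weighted gaze-transition network as an edge Counter.

    gaze_sequence: ordered labels of gazed entities, e.g. the result of
    mapping gaze-ray hits to scene objects over time. Edge weights count
    how often attention shifted from one entity to another.
    """
    edges = Counter()
    for src, dst in zip(gaze_sequence, gaze_sequence[1:]):
        if src != dst:               # keep only transitions between entities
            edges[(src, dst)] += 1
    return edges

def degree_centrality(edges):
    """Normalized (in + out) degree per node, counting distinct edges."""
    nodes = {n for e in edges for n in e}
    deg = Counter()
    for src, dst in edges:
        deg[src] += 1
        deg[dst] += 1
    n = len(nodes)
    return {v: deg[v] / (n - 1) for v in nodes} if n > 1 else {}

def density(edges):
    """Fraction of all possible directed edges that occur at least once."""
    nodes = {n for e in edges for n in e}
    n = len(nodes)
    return len(edges) / (n * (n - 1)) if n > 1 else 0.0

# Hypothetical gaze sequence after mapping gaze-ray hits to entities
seq = ["teacher", "screen", "teacher", "peer_1", "screen", "teacher"]
net = build_attention_network(seq)
```

Because the resulting measures are plain numbers per participant, they can feed directly into the statistical inference testing the abstract refers to; a library such as NetworkX offers the same measures (`degree_centrality`, `density`) for larger analyses.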
Pages: 14