Gaze-Based Interaction Intention Recognition in Virtual Reality

Cited by: 6
Authors
Chen, Xiao-Lin [1 ,2 ]
Hou, Wen-Jun [1 ,3 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Beijing Key Lab Network Syst & Network Culture, Beijing 100876, Peoples R China
[2] Beijing Univ Posts & Telecommun, Sch Automat, Beijing 100876, Peoples R China
[3] Beijing Univ Posts & Telecommun, Sch Digital Media & Design Arts, Beijing 100876, Peoples R China
Keywords
intention prediction; virtual reality; gaze-based interaction; eye tracking; movement
DOI
10.3390/electronics11101647
CLC number
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
With the increasing need for eye tracking in head-mounted virtual reality displays, the gaze-based modality has the potential to predict user intention and unlock intuitive new interaction schemes. In the present work, we explore whether gaze data and hand-eye coordination data can predict a user's interaction intention with the digital world, which could be used to develop predictive interfaces. We validate this on eye-tracking data collected from 10 participants performing item-selection and teleporting tasks in virtual reality. We demonstrate successful prediction of the onset of item selection and teleporting with a 0.943 F1-score using a Gradient Boosting Decision Tree, the best of the four classifiers compared, while the Support Vector Machine has the smallest model size. We also show that hand-eye-coordination-related features improve interaction intention recognition in virtual reality environments.
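The headline comparison metric above is the F1-score on binary intention labels (interaction onset vs. no onset). As a minimal, stdlib-only sketch of how that metric is computed — the gaze-window labels and predictions below are purely illustrative, not data from the paper:

```python
def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for binary labels (1 = intent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:  # no true positives: precision/recall degenerate, define F1 as 0
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-window labels: 1 = interaction onset, 0 = idle gaze
y_true = [1, 1, 0, 0, 1, 0, 1, 1]
y_pred = [1, 0, 0, 0, 1, 1, 1, 1]
print(round(f1_score(y_true, y_pred), 3))  # → 0.8
```

F1 is a sensible choice here because onset windows are typically much rarer than idle-gaze windows, so plain accuracy would reward a classifier that never predicts an onset at all.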
Pages: 23
Related papers
50 records
  • [1] Gaze-based Kinaesthetic Interaction for Virtual Reality
    Li, Zhenxing
    Akkil, Deepak
    Raisamo, Roope
    [J]. INTERACTING WITH COMPUTERS, 2020, 32 (01) : 17 - 32
  • [2] Gaze-based Interaction for Virtual Environments
    Jimenez, Jorge
    Gutierrez, Diego
    Latorre, Pedro
    [J]. JOURNAL OF UNIVERSAL COMPUTER SCIENCE, 2008, 14 (19) : 3085 - 3098
  • [3] Gaze-Based Intention Recognition for Human-Robot Collaboration
    Belcamino, Valerio
    Takase, Miwa
    Kilina, Mariya
    Carfi, Alessandro
    Shimada, Akira
    Shimizu, Sota
    Mastrogiovanni, Fulvio
    [J]. PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON ADVANCED VISUAL INTERFACES, AVI 2024, 2024
  • [4] Gaze-based attention network analysis in a virtual reality classroom
    Stark, Philipp
    Hasenbein, Lisa
    Kasneci, Enkelejda
    Goellner, Richard
    [J]. METHODSX, 2024, 12
  • [5] Gaze-based prediction of pen-based virtual interaction tasks
    Cig, Cagla
    Sezgin, Tevfik Metin
    [J]. INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2015, 73 : 91 - 106
  • [6] Multimodal Gaze-based Interaction
    Pfeiffer, Thies
    Wachsmuth, Ipke
    [J]. AT-AUTOMATISIERUNGSTECHNIK, 2013, 61 (11) : 770 - 776
  • [7] Gaze-Based Human-SmartHome-Interaction by Augmented Reality Controls
    Cottin, Tim
    Nordheimer, Eugen
    Wagner, Achim
    Badreddin, Essameddin
    [J]. ADVANCES IN ROBOT DESIGN AND INTELLIGENT CONTROL, 2017, 540 : 378 - 385
  • [8] GazeAR: Mobile Gaze-Based Interaction in the Context of Augmented Reality Games
    Lankes, Michael
    Stiglbauer, Barbara
    [J]. AUGMENTED REALITY, VIRTUAL REALITY, AND COMPUTER GRAPHICS, PT I, 2016, 9768 : 397 - 406
  • [9] GazeEMD: Detecting Visual Intention in Gaze-Based Human-Robot Interaction
    Shi, Lei
    Copot, Cosmin
    Vanlanduit, Steve
    [J]. ROBOTICS, 2021, 10 (02)
  • [10] LAIF: A logging and interaction framework for gaze-based interfaces in virtual entertainment environments
    Nacke, Lennart E.
    Stellmach, Sophie
    Sasse, Dennis
    Niesenhaus, Joerg
    Dachselt, Raimund
    [J]. ENTERTAINMENT COMPUTING, 2011, 2 (04) : 265 - 273