Predicting user visual attention in virtual reality with a deep learning model

Cited by: 0
Authors
Xiangdong Li
Yifei Shan
Wenqian Chen
Yue Wu
Preben Hansen
Simon Perrault
Affiliations
[1] Zhejiang University, College of Computer Science and Technology
[2] Stockholm University, Department of Computer Science and Systems
[3] Singapore University of Technology and Design, ISTD
Source
Virtual Reality | 2021 / Vol. 25
Keywords
Visual attention; Virtual reality; Deep learning model; Eye tracking
DOI
Not available
Abstract
Recent studies show that users' visual attention during virtual reality museum navigation can be effectively estimated with deep learning models. However, these models rely on large-scale datasets that are usually structurally complex and context specific, which is challenging for nonspecialist researchers and designers. We therefore present a deep learning model, ALRF, that generalises real-time prediction of user visual attention in virtual reality contexts. The model combines two parallel deep learning streams to process a compact dataset of temporal–spatial salient features of the user's eye movements and virtual object coordinates. Its prediction accuracy reached a record-high 91.03%, outperforming state-of-the-art deep learning models. Importantly, with quick parametric tuning, the model showed flexible applicability across different virtual reality environments, including museum and outdoor scenes. We discuss how the proposed model may serve as a generalisable tool for the design and evaluation of adaptive virtual reality applications.
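Because the abstract only describes the architecture at a high level, the following is a minimal, hypothetical sketch of how such a two-stream predictor could be wired up in PyTorch: one stream encodes the temporal–spatial eye-movement features, the other encodes per-frame virtual object coordinates, and their outputs are fused for prediction. The class name TwoStreamAttentionPredictor, the LSTM encoders, the feature dimensions, and the concatenation-based fusion are assumptions for illustration only; they are not the ALRF design or hyperparameters reported in the paper.

```python
# Hypothetical two-stream sketch in PyTorch (assumed layer types and sizes,
# not the ALRF architecture reported in the paper).
import torch
import torch.nn as nn


class TwoStreamAttentionPredictor(nn.Module):
    """Fuses an eye-movement stream and an object-coordinate stream to
    predict which virtual object the user is attending to."""

    def __init__(self, gaze_dim=4, object_dim=3, num_objects=10, hidden=64):
        super().__init__()
        # Stream 1: temporal-spatial eye-movement features per frame
        # (e.g. gaze x/y, fixation duration, saccade amplitude -- assumed).
        self.gaze_stream = nn.LSTM(gaze_dim, hidden, batch_first=True)
        # Stream 2: 3D coordinates of candidate virtual objects per frame.
        self.object_stream = nn.LSTM(object_dim * num_objects, hidden, batch_first=True)
        # Fusion by concatenation, then classification over candidate objects.
        self.classifier = nn.Sequential(
            nn.Linear(hidden * 2, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_objects),
        )

    def forward(self, gaze_seq, object_seq):
        # gaze_seq:   (batch, time, gaze_dim)
        # object_seq: (batch, time, object_dim * num_objects)
        _, (gaze_h, _) = self.gaze_stream(gaze_seq)
        _, (obj_h, _) = self.object_stream(object_seq)
        fused = torch.cat([gaze_h[-1], obj_h[-1]], dim=-1)  # final hidden states
        return self.classifier(fused)  # logits over attended objects


# Example: a batch of 2 sequences, 30 frames each.
model = TwoStreamAttentionPredictor()
logits = model(torch.randn(2, 30, 4), torch.randn(2, 30, 30))
print(logits.shape)  # torch.Size([2, 10])
```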
Pages: 1123-1136
Number of pages: 13
Related papers
50 records in total
  • [21] Virtual Reality in Metaverse over Wireless Networks with User-centered Deep Reinforcement Learning
    Yu, Wenhan
    Chua, Terence Jie
    Zhao, Jun
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 6639 - 6644
  • [22] A Visual Saliency Prediction Model Based on Emotional Attention and Deep Learning
    Yan, Fei
    Xiao, Ruoxiu
    Xiao, Peng
    Zhang, Jiaqi
    Chen, Cheng
    Wang, Zhiliang
    BASIC & CLINICAL PHARMACOLOGY & TOXICOLOGY, 2020, 127 : 89 - 90
  • [23] Effects of a Virtual Human Appearance Fidelity Continuum on Visual Attention in Virtual Reality
    Volonte, Matias
    Duchowski, Andrew T.
    Babu, Sabarish V.
    PROCEEDINGS OF THE 19TH ACM INTERNATIONAL CONFERENCE ON INTELLIGENT VIRTUAL AGENTS (IVA '19), 2019, : 141 - 147
  • [24] [DC] User exploratory learning in a Virtual Reality museum
    Wang, Xueqi
    2024 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS, VRW 2024, 2024, : 1152 - 1153
  • [25] Assessing User Experiences in Virtual Reality Learning Environments
    Li, Xiangming
    Wang, Ke
    Wang, Yincheng
    He, Jibo
    Zhang, Jingshun
    ASIA-PACIFIC EDUCATION RESEARCHER, 2024, 33 (05): 1149 - 1160
  • [26] Virtual reality interaction based on visual attention and kinesthetic information
    Ying Fang
    Qian Liu
    Yiwen Xu
    Yanmin Guo
    Tiesong Zhao
    Virtual Reality, 2023, 27 : 2183 - 2193
  • [27] Virtual reality interaction based on visual attention and kinesthetic information
    Fang, Ying
    Liu, Qian
    Xu, Yiwen
    Guo, Yanmin
    Zhao, Tiesong
    VIRTUAL REALITY, 2023, 27 (03) : 2183 - 2193
  • [28] Coordinating Attention and Cooperation in Multi-user Virtual Reality Narratives
    Brown, Cullen
    Bhutra, Ghanshyam
    Suhail, Mohamed
    Xu, Qinghong
    Ragan, Eric D.
    2017 IEEE VIRTUAL REALITY (VR), 2017, : 377 - 378
  • [29] Integrating deep learning model and virtual reality technology for motion prediction in emergencies
    Meng, Li
    Pan, Fanfan
    Yan, Zhang
    Tao, Chen
    Hao, Du
    SAFETY SCIENCE, 2025, 183
  • [30] Learning with simulated virtual classmates: Effects of social-related configurations on students' visual attention and learning experiences in an immersive virtual reality classroom
    Hasenbein, Lisa
    Stark, Philipp
    Trautwein, Ulrich
    Queiroz, Anna Carolina Muller
    Bailenson, Jeremy
    Hahn, Jens-Uwe
    Goellner, Richard
    COMPUTERS IN HUMAN BEHAVIOR, 2022, 133