Predicting user visual attention in virtual reality with a deep learning model

Cited: 0
Authors
Xiangdong Li
Yifei Shan
Wenqian Chen
Yue Wu
Preben Hansen
Simon Perrault
Affiliations
[1] Zhejiang University,College of Computer Science and Technology
[2] Stockholm University,Department of Computer Science and Systems
[3] Singapore University of Technology and Design,Information Systems Technology and Design (ISTD)
Source
Virtual Reality | 2021 / Vol. 25
Keywords
Visual attention; Virtual reality; Deep learning model; Eye tracking;
DOI
Not available
Abstract
Recent studies show that a user’s visual attention during virtual reality museum navigation can be effectively estimated with deep learning models. However, these models rely on large-scale datasets that are usually structurally complex and context-specific, which is challenging for nonspecialist researchers and designers. We therefore present a deep learning model, ALRF, that generalises real-time prediction of user visual attention in virtual reality contexts. The model combines two parallel deep learning streams to process a compact dataset of temporal–spatial salient features of the user’s eye movements and virtual object coordinates. Its prediction accuracy outperformed state-of-the-art deep learning models, reaching a record high of 91.03%. Importantly, with quick parametric tuning, the model showed flexible applicability across different environments of the virtual reality museum and outdoor scenes. Implications for how the proposed model may be implemented as a generalising tool for adaptive virtual reality application design and evaluation are discussed.
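The abstract describes a two-stream design: one stream processes gaze features, the other virtual object coordinates, and their outputs are fused to predict the attended object. As an illustration only (the record does not give ALRF's actual layers, feature definitions, or trained weights — all dimensions and names below are hypothetical), a toy parallel-stream forward pass might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden-layer MLP stream with ReLU activation."""
    h = np.maximum(0.0, x @ w1 + b1)
    return h @ w2 + b2

# Hypothetical dimensions: 8 gaze features (e.g. fixation position,
# saccade velocity, dwell time) and 6 object-coordinate features.
gaze_dim, obj_dim, hidden, n_objects = 8, 6, 16, 4

# Randomly initialised weights stand in for trained parameters.
Wg1, bg1 = rng.normal(size=(gaze_dim, hidden)), np.zeros(hidden)
Wg2, bg2 = rng.normal(size=(hidden, hidden)), np.zeros(hidden)
Wo1, bo1 = rng.normal(size=(obj_dim, hidden)), np.zeros(hidden)
Wo2, bo2 = rng.normal(size=(hidden, hidden)), np.zeros(hidden)
# Fusion head maps the concatenated streams to per-object scores.
Wf, bf = rng.normal(size=(2 * hidden, n_objects)), np.zeros(n_objects)

def predict_attention(gaze_feats, obj_feats):
    """Run both streams, fuse, and return one probability per object."""
    g = mlp_forward(gaze_feats, Wg1, bg1, Wg2, bg2)
    o = mlp_forward(obj_feats, Wo1, bo1, Wo2, bo2)
    scores = np.concatenate([g, o]) @ Wf + bf
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    return exp / exp.sum()

probs = predict_attention(rng.normal(size=gaze_dim),
                          rng.normal(size=obj_dim))
```

The sketch only shows the fusion idea; the paper's model additionally handles temporal sequences of eye movements, which a real implementation would cover with recurrent or convolutional layers.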
Pages: 1123-1136
Page count: 13
Related papers
50 records
  • [31] Vehicle license plate recognition using visual attention model and deep learning
    Zang, Di
    Chai, Zhenliang
    Zhang, Junqi
    Zhang, Dongdong
    Cheng, Jiujun
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2015, 24 (03)
  • [32] Impact of Constant Visual Biofeedback on User Experience in Virtual Reality Exergames
    Kojic, Tanja
    Nguyen, Lan Thao
    Voigt-Antons, Jan-Niklas
    [J]. 2019 IEEE INTERNATIONAL SYMPOSIUM ON MULTIMEDIA (ISM 2019), 2019, : 307 - 310
  • [33] Definition of guidelines for virtual reality application design based on visual attention
    Baldoni, Sara
    Sassi, Mohamed Saifeddine Hadj
    Carli, Marco
    Battisti, Federica
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (16) : 49615 - 49640
  • [34] Definition of guidelines for virtual reality application design based on visual attention
    Sara Baldoni
    Mohamed Saifeddine Hadj Sassi
    Marco Carli
    Federica Battisti
    [J]. Multimedia Tools and Applications, 2024, 83 : 49615 - 49640
  • [35] VIVID: Virtual Environment for Visual Deep Learning
    Lai, Kuan-Ting
    Lin, Chia-Chih
    Kang, Chun-Yao
    Liao, Mei-Enn
    Chen, Ming-Syan
    [J]. PROCEEDINGS OF THE 2018 ACM MULTIMEDIA CONFERENCE (MM'18), 2018, : 1356 - 1359
  • [36] User Visual Attention Behavior Analysis and Experience Improvement in Virtual Meeting
    Ding Bohao
    Lyu Desheng
    [J]. 2021 IEEE 7TH INTERNATIONAL CONFERENCE ON VIRTUAL REALITY (ICVR 2021), 2021, : 269 - 278
  • [37] The Transmutation of Perception: Research of Attention and Visual Guidance in the Virtual Reality Context
    Tian, Yulin
    [J]. PROCEEDINGS OF THE 2018 ANNUAL SYMPOSIUM ON COMPUTER-HUMAN INTERACTION IN PLAY COMPANION EXTENDED ABSTRACTS (CHI PLAY 2018), 2018, : 91 - 94
  • [38] Cognitive Attention in Autism using Virtual Reality Learning Tool
    Vidhusha, S.
    Divya, B.
    Kavitha, A.
    Narayanan, Viswath R.
    Yaamini, D.
    [J]. PROCEEDINGS OF THE 2019 IEEE 18TH INTERNATIONAL CONFERENCE ON COGNITIVE INFORMATICS & COGNITIVE COMPUTING (ICCI*CC 2019), 2019, : 159 - 165
  • [39] Multimodal Deep Learning Model of Predicting Future Visual Field for Glaucoma Patients
    Pham, Quang T. M.
    Han, Jong Chul
    Park, Do Young
    Shin, Jitae
    [J]. IEEE ACCESS, 2023, 11 : 19049 - 19058
  • [40] Construction of a Virtual Reality Platform for UAV Deep Learning
    Wang, Shubo
    Chen, Jian
    Zhang, Zichao
    Wang, Guangqi
    Tan, Yu
    Zheng, Yongjun
    [J]. 2017 CHINESE AUTOMATION CONGRESS (CAC), 2017, : 3912 - 3916