Predicting user visual attention in virtual reality with a deep learning model

Cited by: 0
Authors
Xiangdong Li
Yifei Shan
Wenqian Chen
Yue Wu
Preben Hansen
Simon Perrault
Affiliations
[1] Zhejiang University, College of Computer Science and Technology
[2] Stockholm University, Department of Computer Science and Systems
[3] Singapore University of Technology and Design, ISTD
Source
Virtual Reality | 2021, Volume 25
Keywords
Visual attention; Virtual reality; Deep learning model; Eye tracking
DOI: not available
Abstract
Recent studies show that a user's visual attention during virtual reality museum navigation can be effectively estimated with deep learning models. However, these models rely on large-scale datasets that are usually structurally complex and context-specific, which makes them challenging for nonspecialist researchers and designers to use. We therefore present a deep learning model, ALRF, that generalises real-time prediction of user visual attention in virtual reality contexts. The model combines two parallel deep learning streams to process a compact dataset of temporal–spatial salient features of the user's eye movements and virtual object coordinates. Its prediction accuracy reached a record high of 91.03%, outperforming state-of-the-art deep learning models. Importantly, with quick parametric tuning, the model proved applicable across different environments, from the virtual reality museum to outdoor scenes. We discuss how the proposed model may serve as a generalisable tool for the design and evaluation of adaptive virtual reality applications.
Pages: 1123–1136
Number of pages: 13
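
The abstract describes ALRF only at a high level: two parallel deep learning streams, one over temporal–spatial gaze features and one over virtual object coordinates. As a rough, hypothetical sketch of what such a two-stream design can look like, the Python snippet below encodes a gaze sequence with an LSTM, encodes candidate object coordinates with a small MLP, and fuses both to score each object as the likely attention target. All layer choices, sizes, and names are illustrative assumptions, not the authors' actual ALRF implementation.

# Hypothetical two-stream sketch (PyTorch); not the paper's ALRF model.
import torch
import torch.nn as nn

class TwoStreamGazePredictor(nn.Module):
    def __init__(self, gaze_dim=4, obj_dim=3, hidden=64):
        super().__init__()
        # Stream 1: temporal gaze features, e.g. (x, y, pupil size, velocity)
        self.gaze_rnn = nn.LSTM(gaze_dim, hidden, batch_first=True)
        # Stream 2: spatial coordinates of candidate virtual objects
        self.obj_mlp = nn.Sequential(
            nn.Linear(obj_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # Fusion head: one relevance score per (gaze, object) pair
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, gaze_seq, obj_coords):
        # gaze_seq: (batch, time, gaze_dim); obj_coords: (batch, n_objects, obj_dim)
        _, (h, _) = self.gaze_rnn(gaze_seq)               # h: (1, batch, hidden)
        g = h[-1].unsqueeze(1).expand(-1, obj_coords.size(1), -1)
        o = self.obj_mlp(obj_coords)                      # (batch, n_objects, hidden)
        return self.head(torch.cat([g, o], dim=-1)).squeeze(-1)  # logit per object

model = TwoStreamGazePredictor()
gaze = torch.randn(8, 120, 4)   # 8 sessions, 120 gaze samples, 4 features each
objs = torch.randn(8, 10, 3)    # 10 candidate objects with xyz coordinates
scores = model(gaze, objs)      # (8, 10): which object is likely attended

Trained with a cross-entropy loss over the per-object scores, a model of this shape could be re-tuned for a new scene by changing only the number of candidate objects and the gaze feature set, which is consistent with the quick parametric tuning the abstract mentions.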
Related papers
50 records in total
  • [41] Predicting Single Neuron Responses of the Primary Visual Cortex with Deep Learning Model
    Deng, Kaiwen
    Schwendeman, Peter S.
    Guan, Yuanfang
    [J]. ADVANCED SCIENCE, 2024, 11 (15)
  • [42] Presence Effects in Virtual Reality Based on User Characteristics: Attention, Enjoyment, and Memory
    Kim, Si Jung
    Laine, Teemu H.
    Suk, Hae Jung
    [J]. ELECTRONICS, 2021, 10 (09)
  • [43] Improving Automated Visual Fault Detection by Combining a Biologically Plausible Model of Visual Attention with Deep Learning
    Beuth, Frederik
    Schlosser, Tobias
    Friedrich, Michael
    Kowerko, Danny
    [J]. IECON 2020: THE 46TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, 2020, : 5323 - 5330
  • [44] Measuring visual walkability perception using panoramic street view images, virtual reality, and deep learning
    Li, Yunqin
    Yabuki, Nobuyoshi
    Fukuda, Tomohiro
    [J]. SUSTAINABLE CITIES AND SOCIETY, 2022, 86
  • [46] Design of virtual reality augmented reality mobile platform and game user behavior monitoring using deep learning (Publication with Expression of Concern)
    Zhang, GuoLong
    [J]. INTERNATIONAL JOURNAL OF ELECTRICAL ENGINEERING EDUCATION, 2020, 60 (2_suppl) : 205 - 221
  • [47] Analysis of Unsatisfying User Experiences and Unmet Psychological Needs for Virtual Reality Exergames Using Deep Learning Approach
    Zhang, Xiaoyan
    Yan, Qiang
    Zhou, Simin
    Ma, Linye
    Wang, Siran
    [J]. INFORMATION, 2021, 12 (11)
  • [48] DeepVS2.0: A Saliency-Structured Deep Learning Method for Predicting Dynamic Visual Attention
    Jiang, Lai
    Xu, Mai
    Wang, Zulin
    Sigal, Leonid
    [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2021, 129 (01) : 203 - 224
  • [50] Production Model of Virtual Reality Learning Environments
    Ortiz Aguinaga, Gerardo
    Cardona Reyes, Hector
    Guzman Mendoza, Jose Eder
    Munoz Arteaga, Jaime
    [J]. CISETC 2019: INTERNATIONAL CONGRESS ON EDUCATION AND TECHNOLOGY IN SCIENCES, 2019, 2555 : 319 - 328