Perceptual self-position estimation based on gaze tracking in virtual reality

Cited by: 2
Authors
Liu, Hongmei [1 ]
Qin, Huabiao [1 ]
Affiliations
[1] South China Univ Technol, Sch Elect & Informat Engn, Guangzhou, Guangdong, Peoples R China
Keywords
Gaze tracking; Depth perception; Stereo vision; Human-computer interaction; Visual discomfort; HEAD-MOUNTED DISPLAYS; DEPTH-PERCEPTION; DISTANCE; ENVIRONMENTS; PERFORMANCE;
DOI
10.1007/s10055-021-00553-y
CLC number
TP39 [Applications of Computers];
Discipline codes
081203 ; 0835 ;
Abstract
The depth perception of the human visual system diverges between virtual and real space; this depth discrepancy distorts the user's spatial judgment in a virtual space, so the user cannot precisely locate their self-position there. Existing localization methods ignore this discrepancy and concentrate only on increasing location accuracy in real space; the discrepancy therefore persists in virtual space and induces visual discomfort. In this paper, a localization method based on depth perception is proposed to measure the user's self-position in a virtual environment. Using binocular gaze tracking, the method estimates perceived depth and constructs an eye matrix from the gaze convergence on a target. By comparing the eye matrix with the camera matrix, it automatically computes the actual depth of the viewed target, so the difference between the actual and perceived depths can be estimated explicitly, without markers. The position of the virtual camera is then compensated by this depth difference to obtain the perceptual self-position. Furthermore, a virtual reality system is redesigned by adjusting the virtual camera position, so that the distance from the user to an object feels the same in virtual and real space. Experimental results demonstrate that the redesigned system improves the user's visual experience, validating the advantage of the proposed localization method.
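The paper's exact eye-matrix and camera-matrix construction is not given in the abstract; a minimal sketch of the two steps it describes (perceived depth from binocular gaze convergence, then camera compensation by the depth difference) might look as follows. All function names are hypothetical, and the depth formula assumes symmetric fixation on the midline between the eyes:

```python
import numpy as np

def perceived_depth_from_vergence(ipd_m, left_gaze, right_gaze):
    """Estimate perceived depth (m) from the vergence angle between the
    two gaze direction vectors, assuming symmetric fixation on the midline."""
    cos_a = np.dot(left_gaze, right_gaze) / (
        np.linalg.norm(left_gaze) * np.linalg.norm(right_gaze))
    vergence = np.arccos(np.clip(cos_a, -1.0, 1.0))
    # Isosceles triangle: half the interpupillary distance over the
    # tangent of half the vergence angle gives the fixation distance.
    return ipd_m / (2.0 * np.tan(vergence / 2.0))

def compensate_camera(camera_pos, view_dir, actual_depth, perceived_depth):
    """Shift the virtual camera along its (unit) view axis by the
    discrepancy between actual and perceived depth."""
    return camera_pos + (actual_depth - perceived_depth) * view_dir
```

For example, with a 64 mm IPD and both gaze rays converging on a point 1 m ahead on the midline, the estimated perceived depth is 1 m; if the actual target depth differs, `compensate_camera` offsets the virtual camera by that difference.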
Pages: 269 - 278
Page count: 10
Related papers
50 records
  • [32] Shape Recognition of Metallic Landmark and its Application to Self-Position Estimation for Mobile Robot
    Fujii, Hajime
    Ando, Yoshinobu
    Yoshimi, Takashi
    Mizukawa, Makoto
    JOURNAL OF ROBOTICS AND MECHATRONICS, 2010, 22 (06) : 718 - 725
  • [33] Image-based self-position and orientation method for moving platform
    Li DeRen
    Liu Yong
    Yuan XiuXiao
    SCIENCE CHINA-INFORMATION SCIENCES, 2013, 56 (04) : 1 - 14
  • [34] Gaze-based Kinaesthetic Interaction for Virtual Reality
    Li, Zhenxing
    Akkil, Deepak
    Raisamo, Roope
    INTERACTING WITH COMPUTERS, 2020, 32 (01) : 17 - 32
  • [35] Evaluation of Gaze Depth Estimation from Eye Tracking in Augmented Reality
    Öney, Seyda Z.
    Rodrigues, Nils
    Becher, Michael
    Ertl, Thomas
    Reina, Guido
    Sedlmair, Michael
    Weiskopf, Daniel
    ETRA 2020 SHORT PAPERS: ACM SYMPOSIUM ON EYE TRACKING RESEARCH & APPLICATIONS, 2020,
  • [36] Gaze Tracking for Eye-Hand Coordination Training Systems in Virtual Reality
    Mutasim, Aunnoy K.
    Stuerzlinger, Wolfgang
    Batmaz, Anil Ufuk
    CHI'20: EXTENDED ABSTRACTS OF THE 2020 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, 2020,
  • [37] Gaze point estimation method based on ELM in gaze tracking system
    Zhu, Bo
    Zhang, Tian-Xia
    2013, Northeast University, 34: 335 - 338
  • [38] A Statistical Approach to Continuous Self-Calibrating Eye Gaze Tracking for Head-Mounted Virtual Reality Systems
    Tripathi, Subarna
    Guenter, Brian
    2017 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2017), 2017, : 862 - 870
  • [39] Mobile Augmented Reality: Placing Labels based on Gaze Position
    McNamara, Ann
    Kabeerdoss, Chethna
    ADJUNCT PROCEEDINGS OF THE 2016 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY (ISMAR-ADJUNCT), 2016, : 36 - 37
  • [40] Path generation in virtual reality environment based on gaze analysis
    Antonya, Csaba
    Barbuceanu, Florin Grigore
    Rusak, Zoltan
    IEEE AFRICON 2011, 2011,