Perceptual self-position estimation based on gaze tracking in virtual reality

Cited by: 2
Authors
Liu, Hongmei [1 ]
Qin, Huabiao [1 ]
Affiliations
[1] South China University of Technology, School of Electronic and Information Engineering, Guangzhou, Guangdong, People's Republic of China
Keywords
Gaze tracking; Depth perception; Stereo vision; Human-computer interaction; Visual discomfort; Head-mounted displays; Distance; Environments; Performance
DOI
10.1007/s10055-021-00553-y
Chinese Library Classification
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
The depth perception of the human visual system diverges between virtual and real space. This depth discrepancy distorts the user's spatial judgment in a virtual space, meaning the user cannot precisely locate their self-position there. Existing localization methods ignore the depth discrepancy and concentrate only on increasing location accuracy in real space, so the discrepancy persists in virtual space and induces visual discomfort. In this paper, a localization method based on depth perception is proposed to measure the self-position of the user in a virtual environment. Using binocular gaze tracking, the method estimates perceived depth and constructs an eye matrix by measuring gaze convergence on a target. By comparing the eye matrix with the camera matrix, the method automatically computes the actual depth of the viewed target; the difference between the actual depth and the perceived depth can then be estimated explicitly, without markers. The virtual camera position is compensated by this depth difference to obtain the perceptual self-position. Furthermore, a virtual reality system is redesigned by adjusting the virtual camera position, so that the distance from the user to an object feels the same in virtual and real space. Experimental results demonstrate that the redesigned system improves the user's visual experience, validating the superiority of the proposed localization method.
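As a rough illustration of the pipeline the abstract outlines, the Python sketch below estimates perceived depth from binocular gaze vergence by (approximately) intersecting the two gaze rays, then shifts the virtual camera along its viewing axis by the difference between actual and perceived depth. This is a minimal sketch under stated assumptions, not the paper's implementation: the function names are hypothetical, the midpoint-of-skew-rays triangulation stands in for the paper's eye-matrix construction, and the sign convention of the compensation is assumed.

import numpy as np

def perceived_depth_from_gaze(p_left, d_left, p_right, d_right):
    # Vergence-based fixation estimate: the two gaze rays are generally
    # skew, so take the midpoint of their shortest connecting segment
    # (the standard closest-points-between-two-lines formulation).
    d_left = d_left / np.linalg.norm(d_left)
    d_right = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a, b, c = d_left @ d_left, d_left @ d_right, d_right @ d_right
    d, e = d_left @ w0, d_right @ w0
    denom = a * c - b * b          # approaches 0 for parallel gaze
    if abs(denom) < 1e-9:
        return None                # no usable vergence signal
    t_left = (b * e - c * d) / denom
    t_right = (a * e - b * d) / denom
    q_left = p_left + t_left * d_left        # closest point on left ray
    q_right = p_right + t_right * d_right    # closest point on right ray
    return 0.5 * (q_left + q_right)          # estimated fixation point

def compensate_camera(cam_pos, view_dir, actual_depth, perceived_depth):
    # Hypothetical compensation step: translate the virtual camera along
    # its viewing axis by the depth discrepancy so the rendered target
    # appears at its real distance (the sign convention is an assumption).
    view_dir = view_dir / np.linalg.norm(view_dir)
    return cam_pos + (perceived_depth - actual_depth) * view_dir

# Example: eyes 64 mm apart, both fixating a target 2 m straight ahead.
eye_l = np.array([-0.032, 0.0, 0.0])
eye_r = np.array([+0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 2.0])
fixation = perceived_depth_from_gaze(eye_l, target - eye_l, eye_r, target - eye_r)
print(fixation)  # ~[0, 0, 2]: perceived depth of about 2 m

In practice the depth difference would be estimated per fixated target and applied as a correction to the head-tracked camera pose; with noise-free rays, as in this example, perceived and actual depth coincide and the compensation is zero.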
Pages: 269-278
Page count: 10
Related papers
50 items in total
  • [1] Self-position awareness-based presence and interaction in virtual reality
    Xia, Zhenping
    Hwang, Alex
    Virtual Reality, 2020, 24(2): 255-262
  • [2] Self-Position Estimation based on Road Sign using Augmented Reality Technology
    Aoki, Rio
    Tanaka, Hiroyuki
    Izumi, Kiyotaka
    Tsujimura, Takeshi
    Proceedings 2018 12th France-Japan and 10th Europe-Asia Congress on Mechatronics, 2018: 39-42
  • [3] Object Recognition based Self-Position Estimation for Underwater Robots
    Tamura, Yuma
    Katayama, Takafumi
    Song, Tian
    Shimamoto, Takashi
    2022 OCEANS Hampton Roads, 2022
  • [4] Self-position estimation of an autonomous mobile robot with variable processing time
    Aichi Institute of Technology, Toyota, Aichi, Japan (authors not listed)
    IEEJ Trans. Electron. Inf. Syst., (6): 976-985
  • [5] Image Feature Significance for Self-position Estimation with Variable Processing Time
    Doki, Kae
    Tanabe, Manabu
    Torii, Akihiro
    Ueda, Akiteru
    Artificial Neural Networks and Intelligent Information Processing, Proceedings, 2009: 134-142
  • [6] Self-Position Estimation of an Autonomous Mobile Robot with Variable Processing Time
    Doki, Kae
    Isetani, Naohiro
    Torii, Akihiro
    Ueda, Akiteru
    Tsutsumi, Hirotsugu
    Electronics and Communications in Japan, 2010, 93(11): 46-58
  • [7] Self-position estimation of autonomous mobile robot that uses metallic landmark
    Fujii, Hajime
    Ando, Yoshinobu
    Yoshimi, Takashi
    Mizukawa, Makoto
    Emerging Trends in Mobile Robotics, 2010: 1129-1136
  • [8] Self-position estimation using terrain shadows for precise planetary landing
    Kuga, Tomoki
    Kojima, Hirohisa
    Acta Astronautica, 2018, 148: 345-354