Optimizing Depth Perception in Virtual and Augmented Reality through Gaze-contingent Stereo Rendering

Cited by: 20
Authors: Krajancich, Brooke [1]; Kellnhofer, Petr [1,2]; Wetzstein, Gordon [1]
Affiliations:
[1] Stanford Univ, Stanford, CA 94305, USA
[2] Raxium, Fremont, CA, USA
Source: ACM TRANSACTIONS ON GRAPHICS, 2020, Vol. 39, No. 6
Keywords: applied perception; rendering; virtual reality; augmented reality; DISPLAY; CUE
DOI: 10.1145/3414685.3417820
CLC Number (Chinese Library Classification): TP31 [Computer Software]
Subject Classification Code: 081202; 0835
Abstract:
Virtual and augmented reality (VR/AR) displays crucially rely on stereoscopic rendering to enable perceptually realistic user experiences. Yet, existing near-eye display systems ignore the gaze-dependent shift of the no-parallax point in the human eye. Here, we introduce a gaze-contingent stereo rendering technique that models this effect and conduct several user studies to validate its effectiveness. Our findings include experimental validation of the location of the no-parallax point, which we then use to demonstrate significant improvements of disparity and shape distortion in a VR setting, and consistent alignment of physical and digitally rendered objects across depths in optical see-through AR. Our work shows that gaze-contingent stereo rendering improves perceptual realism and depth perception of emerging wearable computing systems.
Pages: 10
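
The abstract describes replacing fixed per-eye camera positions with stereo cameras placed at a gaze-dependent no-parallax point. Below is a minimal illustrative sketch of that idea, not the authors' implementation: it assumes the no-parallax point lies a fixed distance along the current gaze direction from each eye's center of rotation, and the constants (NO_PARALLAX_OFFSET_M, IPD_M) and the function name are hypothetical placeholders rather than the paper's calibrated values.

```python
import numpy as np

# Minimal illustrative sketch (assumptions, not the paper's calibrated model):
# the no-parallax point is taken to lie a fixed distance along the current
# gaze direction from each eye's center of rotation. All constants below are
# placeholder values chosen for illustration only.
NO_PARALLAX_OFFSET_M = 0.008   # assumed rotation-center-to-no-parallax-point distance (~8 mm)
IPD_M = 0.063                  # assumed interpupillary distance

def gaze_contingent_camera_position(eye_rotation_center, gaze_dir):
    """Shift the virtual stereo camera along the gaze direction so that it
    sits at a gaze-dependent no-parallax point instead of a fixed location."""
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    return np.asarray(eye_rotation_center, dtype=float) + NO_PARALLAX_OFFSET_M * gaze_dir

# Example: both eyes fixate a point 0.5 m in front and slightly to the right,
# so each camera origin shifts toward the fixation point as gaze changes.
fixation = np.array([0.10, 0.0, -0.5])
left_center = np.array([-IPD_M / 2, 0.0, 0.0])
right_center = np.array([+IPD_M / 2, 0.0, 0.0])

left_cam = gaze_contingent_camera_position(left_center, fixation - left_center)
right_cam = gaze_contingent_camera_position(right_center, fixation - right_center)
print("left camera: ", left_cam)
print("right camera:", right_cam)
```

In a standard renderer, only the per-eye view matrices would change under this scheme; the projection and scene content stay the same, which is what makes the correction inexpensive to apply per frame.
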
Related Papers (50 records in total; 10 listed)
• [1] Zhu, Fang; Lu, Ping; Li, Pin; Sheng, Bin; Mao, Lijuan. Gaze-Contingent Rendering in Virtual Reality. ADVANCES IN COMPUTER GRAPHICS (CGI 2020), 2020, 12221: 16-23
• [2] Konrad, Robert; Angelopoulos, Anastasios; Wetzstein, Gordon. Gaze-Contingent Ocular Parallax Rendering for Virtual Reality. ACM TRANSACTIONS ON GRAPHICS, 2020, 39(2)
• [3] Konrad, Robert; Angelopoulos, Anastasios; Wetzstein, Gordon. Gaze-Contingent Ocular Parallax Rendering for Virtual Reality. SIGGRAPH '19 - ACM SIGGRAPH 2019 TALKS, 2019
• [4] Mauderer, Michael; Conte, Simone; Nacenta, Miguel A.; Vishwanath, Dhanraj. Depth Perception with Gaze-contingent Depth of Field. 32ND ANNUAL ACM CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2014), 2014: 217-226
• [5] Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A.; Wetzstein, Gordon. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays. PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2017, 114(9): 2183-2188
• [6] Mauderer, M.; Conte, S. I.; Nacenta, M. A.; Vishwanath, D. Using Gaze-contingent Depth of Field to Facilitate Depth Perception. I-PERCEPTION, 2014, 5(5)
• [7] Mauderer, Michael; Flatla, David R.; Nacenta, Miguel A. Gaze-Contingent Manipulation of Color Perception. 34TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2016), 2016: 5191-5202
• [8] Vinnikov, Margarita; Allison, Robert S.; Fernandes, Suzette. Gaze-Contingent Auditory Displays for Improved Spatial Attention in Virtual Reality. ACM TRANSACTIONS ON COMPUTER-HUMAN INTERACTION, 2017, 24(3)
• [9] Limballe, Annabelle; Kulpa, Richard; Vu, Alexandre; Mavromatis, Mae; Bennett, Simon J. Virtual reality boxing: Gaze-contingent manipulation of stimulus properties using blur. FRONTIERS IN PSYCHOLOGY, 2022, 13
• [10] Arabadzhiyska, Elena; Tursun, Okan Tarhan; Myszkowski, Karol; Seidel, Hans-Peter; Didyk, Piotr. Saccade Landing Position Prediction for Gaze-Contingent Rendering. ACM TRANSACTIONS ON GRAPHICS, 2017, 36(4)