Fast Depth Densification for Occlusion-aware Augmented Reality

Cited by: 2
Authors
Holynski, Aleksander [1 ]
Kopf, Johannes [2 ]
Affiliations
[1] Univ Washington, Seattle, WA 98195 USA
[2] Facebook, Cambridge, MA USA
Source
ACM TRANSACTIONS ON GRAPHICS, 2018, Vol. 37, No. 6
Keywords
Augmented Reality; 3D Reconstruction; Video Analysis; Depth Estimation; Simultaneous Localization and Mapping;
DOI
Not available
Chinese Library Classification (CLC)
TP31 [Computer software]
Discipline Classification Code
081202; 0835
Abstract
Current AR systems only track sparse geometric features but do not compute depth for all pixels. For this reason, most AR effects are pure overlays that can never be occluded by real objects. We present a novel algorithm that propagates sparse depth to every pixel in near real time. The produced depth maps are spatio-temporally smooth but exhibit sharp discontinuities at depth edges. This enables AR effects that can fully interact with and be occluded by the real scene. Our algorithm takes a video and a sparse SLAM reconstruction as input. It starts by estimating soft depth edges from the gradients of optical flow fields. Because optical flow is unreliable near occlusions, we compute forward and backward flow fields and fuse the resulting depth edges using a novel reliability measure. We then localize the depth edges by thinning them and aligning them with image edges. Finally, we propagate the sparse depth with an optimization that encourages smoothness but allows discontinuities at the recovered depth edges. We present results for numerous real-world examples and demonstrate the algorithm's effectiveness for several occlusion-aware AR video effects. To quantitatively evaluate our algorithm, we characterize the properties that make depth maps desirable for AR applications and present novel evaluation metrics that capture how well these properties are satisfied. Our results compare favorably to a set of competitive baseline algorithms in this context.
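The pipeline summarized in the abstract (flow-based soft depth edges, forward/backward fusion, edge-aware densification of sparse SLAM depth) can be approximated in a few dozen lines. The sketch below is not the authors' implementation: it substitutes OpenCV's Farnebäck optical flow for the paper's flow estimator, replaces the reliability-based fusion with a simple element-wise minimum, skips the edge thinning and alignment step, and uses hypothetical weights (`sigma`, `lam`, and the exponential falloff) in an edge-aware least-squares densification.

```python
# Minimal, illustrative sketch only -- not the authors' implementation.
# Assumes OpenCV (cv2), NumPy, and SciPy; all parameter values are hypothetical.
import numpy as np
import cv2
import scipy.sparse as sp
import scipy.sparse.linalg as spla


def soft_depth_edges(gray_prev, gray_cur, gray_next):
    """Soft depth-edge map from the gradient magnitude of forward and
    backward optical flow, fused by an element-wise minimum (the paper
    uses a reliability measure instead)."""
    fwd = cv2.calcOpticalFlowFarneback(gray_cur, gray_next, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)
    bwd = cv2.calcOpticalFlowFarneback(gray_cur, gray_prev, None,
                                       0.5, 3, 15, 3, 5, 1.2, 0)

    def grad_mag(flow):
        gy, gx = np.gradient(flow, axis=(0, 1))
        return np.sqrt((gx ** 2 + gy ** 2).sum(axis=-1))

    return np.minimum(grad_mag(fwd), grad_mag(bwd))


def densify(sparse_depth, sparse_mask, edge_map, lam=1e3, sigma=5.0):
    """Propagate sparse depth to all pixels by least squares: smoothness
    between neighboring pixels, down-weighted across depth edges, plus a
    data term at the sparse SLAM depth samples."""
    h, w = sparse_depth.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals, rhs = [], [], [], []
    r = 0

    def add_smooth(i, j, weight):
        nonlocal r
        rows.extend([r, r]); cols.extend([i, j]); vals.extend([weight, -weight])
        rhs.append(0.0); r += 1

    # Edge-aware smoothness weights (exponential falloff is a hypothetical choice).
    w_x = np.exp(-sigma * np.maximum(edge_map[:, :-1], edge_map[:, 1:]))
    w_y = np.exp(-sigma * np.maximum(edge_map[:-1, :], edge_map[1:, :]))
    for y in range(h):
        for x in range(w - 1):
            add_smooth(idx[y, x], idx[y, x + 1], w_x[y, x])
    for y in range(h - 1):
        for x in range(w):
            add_smooth(idx[y, x], idx[y + 1, x], w_y[y, x])

    # Data terms anchoring the solution to the sparse depth samples.
    for y, x in zip(*np.nonzero(sparse_mask)):
        rows.append(r); cols.append(idx[y, x]); vals.append(lam)
        rhs.append(lam * sparse_depth[y, x]); r += 1

    A = sp.csr_matrix((vals, (rows, cols)), shape=(r, n))
    depth = spla.lsqr(A, np.asarray(rhs))[0]
    return depth.reshape(h, w)
```

In this sketch `sparse_depth` holds depths at the projected SLAM feature locations, `sparse_mask` marks which pixels carry such a sample, and `edge_map` is the fused soft depth-edge map; the Python loops over all pixel pairs would need to be vectorized for anything approaching the near-real-time performance claimed in the paper.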
Pages: 11
Related Papers
50 in total
  • [21] OccCasNet: Occlusion-Aware Cascade Cost Volume for Light Field Depth Estimation
    Chao, Wentao
    Duan, Fuqing
    Wang, Xuechun
    Wang, Yingqian
    Lu, Ke
    Wang, Guanghui
    IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2024, 10 : 1680 - 1691
  • [22] Accurate Depth and Normal Maps from Occlusion-Aware Focal Stack Symmetry
    Strecke, Michael
    Alperovich, Anna
    Goldluecke, Bastian
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 2529 - 2537
  • [23] Occlusion-Aware Motion Planning at Roundabouts
    Debada, Ezequiel
    Ung, Adeline
    Gillet, Denis
IEEE TRANSACTIONS ON INTELLIGENT VEHICLES, 2021, 6 (02): 276 - 287
  • [24] An Occlusion-aware Feature for Range Images
    Quadros, A.
    Underwood, J. P.
    Douillard, B.
    2012 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2012, : 4428 - 4435
  • [25] Occlusion-aware Video Temporal Consistency
    Yao, Chun-Han
    Chang, Chia-Yang
    Chien, Shao-Yi
    PROCEEDINGS OF THE 2017 ACM MULTIMEDIA CONFERENCE (MM'17), 2017, : 777 - 785
  • [26] Adversarial Occlusion-aware Face Detection
    Chen, Yujia
    Song, Lingxiao
    Hu, Yibo
    He, Ran
2018 IEEE 9TH INTERNATIONAL CONFERENCE ON BIOMETRICS THEORY, APPLICATIONS AND SYSTEMS (BTAS), 2018
  • [27] Towards Occlusion-Aware Multifocal Displays
    Chang, Jen-Hao Rick
    Levin, Anat
    Kumar, B. V. K. Vijaya
    Sankaranarayanan, Aswin C.
ACM TRANSACTIONS ON GRAPHICS, 2020, 39 (04)
  • [28] Occlusion-aware optical flow estimation
    Ince, Serdar
    Konrad, Janusz
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2008, 17 (08) : 1443 - 1451
  • [29] Occlusion-Aware Unsupervised Learning of Depth From 4-D Light Fields
    Jin, Jing
    Hou, Junhui
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 2216 - 2228
  • [30] Occlusion-aware depth estimation for light field using multi-orientation EPIs
    Sheng, Hao
    Zhao, Pan
    Zhang, Shuo
    Zhang, Jun
    Yang, Da
    PATTERN RECOGNITION, 2018, 74 : 587 - 599