PENet: Towards Precise and Efficient Image Guided Depth Completion

Cited by: 144
Authors
Hu, Mu [1 ]
Wang, Shuling [1 ]
Li, Bin [1 ]
Ning, Shiyu [2 ]
Fan, Li [2 ]
Gong, Xiaojin [1 ]
Affiliations
[1] Zhejiang Univ, Coll Informat Sci & Elect Engn, Hangzhou, Peoples R China
[2] Hisilicon, Huawei Shanghai, Dept Turing Solut, Shanghai, Peoples R China
Keywords
DOI
10.1109/ICRA48506.2021.9561035
CLC Classification
TP [Automation Technology, Computer Technology]
Subject Classification
0812
Abstract
Image guided depth completion is the task of generating a dense depth map from a sparse depth map and a high-quality color image. In this task, how the color and depth modalities are fused plays an important role in achieving good performance. This paper proposes a two-branch backbone, consisting of a color-dominant branch and a depth-dominant branch, to exploit and fuse the two modalities thoroughly. More specifically, one branch takes a color image and a sparse depth map as input and predicts a dense depth map. The other branch takes as inputs the sparse depth map and the previously predicted depth map, and also outputs a dense depth map. The depth maps predicted by the two branches are complementary to each other and are therefore adaptively fused. In addition, we propose a simple geometric convolutional layer to encode 3D geometric cues. The geometrically encoded backbone conducts the fusion of the different modalities at multiple stages, leading to good depth completion results. We further implement a dilated and accelerated CSPN++ to refine the fused depth map efficiently. The proposed full model ranks 1st on the KITTI depth completion online leaderboard at the time of submission. It also infers much faster than most of the top-ranked methods. The code of this work is available at https://github.com/JUGGHM/PENet.ICRA2021.
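The geometric convolutional layer described in the abstract augments a convolution's input with per-pixel 3D position maps obtained by back-projecting depth through the camera intrinsics. The following is a minimal sketch of that encoding step, assuming a standard pinhole model; the function name and parameters are illustrative, not the authors' actual implementation.

```python
import numpy as np

def geometric_encoding(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into per-pixel 3D coordinates.

    Returns a (3, H, W) array of X/Y/Z position maps that would be
    concatenated to the feature channels fed into a convolution,
    giving the layer explicit 3D geometric cues.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u indexes columns, v indexes rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole back-projection: (u, v, depth) -> (X, Y, Z) in camera frame.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return np.stack([x, y, z], axis=0)

# Toy example: constant depth of 2 m with illustrative intrinsics.
depth = np.full((4, 6), 2.0)
xyz = geometric_encoding(depth, fx=2.0, fy=2.0, cx=3.0, cy=2.0)
```

In the paper this encoding is applied at multiple stages of the backbone, so each fusion stage sees geometry-aware features rather than raw image coordinates.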
Pages: 13656-13662 (7 pages)