A real-time semi-dense depth-guided depth completion network

Cited by: 1
Authors
Xu, JieJie [1 ]
Zhu, Yisheng [1 ]
Wang, Wenqing [1 ]
Liu, Guangcan [2 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Sch Automat, Nanjing 210044, Peoples R China
[2] Southeast Univ, Sch Automat, Nanjing 210018, Peoples R China
Source
VISUAL COMPUTER | 2024, Vol. 40, No. 1
Keywords
Depth completion; Neural networks; Multi-modal fusion; Sparse; Reconstruction; Propagation
DOI
10.1007/s00371-022-02767-w
Chinese Library Classification
TP31 [Computer software];
Subject Classification Code
081202; 0835
Abstract
Depth completion, the task of predicting dense depth maps from given sparse depth maps, is an important topic in computer vision. To cope with this task, both traditional image-processing-based and data-driven deep-learning-based algorithms have been established in the literature. In general, traditional algorithms, built upon non-learnable methods such as interpolation and custom kernels, handle flat regions well but may blunt sharp edges. Deep-learning-based algorithms, despite their strengths in many aspects, still have several limitations; for example, their performance depends heavily on the quality of the given sparse maps, and the dense maps they produce may contain artifacts and are often poor in terms of geometric consistency. To tackle these issues, in this work we propose a simple yet effective algorithm that combines the strengths of traditional image-processing techniques and prevalent deep-learning methods. Namely, given a sparse depth map, our algorithm first generates a semi-dense map and a 3D pose map using the adaptive densification module (ADM) and the coordinate projection module (CPM), respectively, and then feeds the obtained maps into a two-branch convolutional neural network to produce the final dense depth map. The proposed algorithm is evaluated on both a challenging outdoor dataset (KITTI) and an indoor dataset (NYUv2); the experimental results show that our method performs better than some existing methods.
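The two pre-processing modules named in the abstract lend themselves to a brief illustration. The Python sketch below is not the authors' implementation; it only shows, under plain pinhole-camera assumptions, what an adaptive densification step (a stand-in for ADM) and a coordinate projection step (a stand-in for CPM, assuming the "3D pose map" is a per-pixel XYZ position map) might look like before both maps are fed to a two-branch network. The window size, the nearest-surface fill rule, the camera intrinsics, and all function names are illustrative assumptions.

import numpy as np

def adaptive_densification(sparse_depth, kernel=5):
    """Fill empty pixels with the nearest valid depth inside a local window,
    yielding a semi-dense map (a simple stand-in for the paper's ADM)."""
    h, w = sparse_depth.shape
    pad = kernel // 2
    padded = np.pad(sparse_depth, pad, mode="constant")   # zero = missing
    semi_dense = sparse_depth.copy()
    for y in range(h):
        for x in range(w):
            if semi_dense[y, x] > 0:
                continue                                   # already observed
            window = padded[y:y + kernel, x:x + kernel]
            valid = window[window > 0]
            if valid.size:
                semi_dense[y, x] = valid.min()             # closest surface wins
    return semi_dense

def coordinate_projection(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3-channel XYZ map with the pinhole model:
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=0)                 # shape (3, H, W)

if __name__ == "__main__":
    sparse = np.zeros((8, 16), dtype=np.float32)
    sparse[::3, ::4] = 10.0                                # a few fake LiDAR returns
    semi = adaptive_densification(sparse)
    xyz = coordinate_projection(semi, fx=721.5, fy=721.5, cx=8.0, cy=4.0)
    print(semi.shape, xyz.shape)                           # (8, 16) (3, 8, 16)

In practice the semi-dense map and the XYZ map would be stacked as inputs to the two network branches; the KITTI-like focal length used here is only a placeholder.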
Pages: 87-97
Number of pages: 11
Related Papers
50 records in total
  • [31] Feature-based visual odometry prior for real-time semi-dense stereo SLAM
    Krombach, Nicola
    Droeschel, David
    Houben, Sebastian
    Behnke, Sven
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2018, 109 : 38 - 58
  • [32] RigNet: Repetitive Image Guided Network for Depth Completion
    Yan, Zhiqiang
    Wang, Kun
    Li, Xiang
    Zhang, Zhenyu
    Li, Jun
    Yang, Jian
    COMPUTER VISION - ECCV 2022, PT XXVII, 2022, 13687 : 214 - 230
  • [33] DART: dense articulated real-time tracking with consumer depth cameras
    Schmidt, Tanner
    Newcombe, Richard
    Fox, Dieter
    AUTONOMOUS ROBOTS, 2015, 39 (03) : 239 - 258
  • [34] A miniature stereo vision machine for real-time dense depth mapping
    Jia, YD
    Xu, YH
    Liu, WC
    Yang, C
    Zhu, YW
    Zhang, XX
    An, LP
    COMPUTER VISION SYSTEMS, PROCEEDINGS, 2003, 2626 : 268 - 277
  • [36] Tracking Based Depth-guided Video Inpainting
    Hatheele, Saroj
    Zaveri, Mukesh A.
    2013 FOURTH NATIONAL CONFERENCE ON COMPUTER VISION, PATTERN RECOGNITION, IMAGE PROCESSING AND GRAPHICS (NCVPRIPG), 2013,
  • [37] Depth-guided deep filtering network for efficient single image bokeh rendering
    Chen, Quan
    Zheng, Bolun
    Zhou, Xiaofei
    Huang, Aiai
    Sun, Yaoqi
    Chen, Chuqiao
    Yan, Chenggang
    Yuan, Shanxin
    NEURAL COMPUTING AND APPLICATIONS, 2023, 35 : 20869 - 20887
  • [38] Real-Time Wide-Baseline Place Recognition Using Depth Completion
    Maffra, Fabiola
    Teixeira, Lucas
    Chen, Zetao
    Chli, Margarita
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2019, 4 (02): 1525 - 1532
  • [39] DepthNet: Real-Time LiDAR Point Cloud Depth Completion for Autonomous Vehicles
    Bai, Lin
    Zhao, Yiming
    Elhousni, Mahdi
    Huang, Xinming
    IEEE ACCESS, 2020, 8 : 227825 - 227833
  • [40] Real-Time Dense Depth Estimation Using Semantically-Guided LIDAR Data Propagation and Motion Stereo
    Hirata, Atsuki
    Ishikawa, Ryoichi
    Roxas, Menandro
    Oishi, Takeshi
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2019, 4 (04): 3806 - 3811