Stereo-augmented Depth Completion from a Single RGB-LiDAR image

Cited by: 6
Authors
Choi, Keunhoon [1]
Jeong, Somi [1]
Kim, Youngjung [2]
Sohn, Kwanghoon [1]
Affiliations
[1] Yonsei Univ, Sch Elect & Elect Engn, Seoul 03722, South Korea
[2] Agcy Def Dev ADD, Daejeon 34060, South Korea
DOI
10.1109/ICRA48506.2021.9561557
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Depth completion is an important task in computer vision and robotics applications, which aims at predicting accurate dense depth from a single RGB-LiDAR image. Convolutional neural networks (CNNs) have been widely used for depth completion to learn a mapping function from sparse to dense depth. However, recent methods do not exploit any 3D geometric cues during the inference stage and rely mainly on sophisticated CNN architectures. In this paper, we present a cascaded, geometrically inspired learning framework for depth completion, consisting of three stages: view extrapolation, stereo matching, and depth refinement. The first stage extrapolates a virtual (right) view using a single RGB (left) image and its LiDAR data. We then mimic binocular stereo matching and, as a result, explicitly encode geometric constraints during depth completion. This stage augments the final refinement process by providing additional geometric reasoning. We also introduce a distillation framework based on a teacher-student strategy to effectively train our network. Knowledge from a teacher model privileged with real stereo pairs is transferred to the student through feature distillation. Experimental results on the KITTI depth completion benchmark demonstrate that the proposed method is superior to state-of-the-art methods.
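The abstract describes a three-stage cascade (view extrapolation → stereo matching → depth refinement) trained with feature distillation from a stereo-privileged teacher. The sketch below is a minimal, illustrative rendering of that pipeline shape only; every function here is a placeholder assumption (e.g., a pixel shift standing in for the learned view extrapolation, brute-force SAD matching standing in for the learned stereo network), not the authors' implementation.

```python
import numpy as np

def feature_distillation_loss(student_feat, teacher_feat):
    """Mean squared distance between student and teacher feature maps.

    Illustrates (only schematically) transferring knowledge from a
    teacher privileged with real stereo pairs to the monocular student.
    """
    return float(np.mean((student_feat - teacher_feat) ** 2))

def view_extrapolation(rgb_left, sparse_depth):
    # Stage 1 placeholder: a real system would warp the left view with
    # LiDAR-derived disparities; here we just shift pixels horizontally.
    return np.roll(rgb_left, shift=1, axis=1)

def stereo_matching(left, right, max_disp=4):
    # Stage 2 placeholder: brute-force sum-of-absolute-differences
    # matching over a small disparity range, winner-take-all.
    costs = np.stack([
        np.abs(left - np.roll(right, d, axis=1)).mean(axis=-1)
        for d in range(max_disp)
    ])
    return np.argmin(costs, axis=0).astype(np.float32)  # coarse disparity map

def depth_refinement(disparity, sparse_depth):
    # Stage 3 placeholder: trust reliable LiDAR samples where available.
    refined = disparity.copy()
    mask = sparse_depth > 0
    refined[mask] = sparse_depth[mask]
    return refined

# Toy end-to-end run on random data.
rgb = np.random.rand(8, 8, 3)
lidar = np.zeros((8, 8))
lidar[2, 2] = 5.0                      # one sparse LiDAR sample
virtual_right = view_extrapolation(rgb, lidar)
disparity = stereo_matching(rgb, virtual_right)
dense_depth = depth_refinement(disparity, lidar)
```

The point of the sketch is only the data flow: a virtual second view is synthesized so that geometric (stereo) constraints can be imposed at inference time, rather than relying on the CNN alone.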
Pages: 13641-13647
Number of pages: 7
Related Papers
50 records in total
  • [1] Linear Inverse Problem for Depth Completion with RGB Image and Sparse LIDAR Fusion
    Fu, Chen; Mertz, Christoph; Dolan, John M.
    2021 IEEE International Conference on Robotics and Automation (ICRA 2021), 2021: 14127-14133
  • [2] Deep Depth Completion of a Single RGB-D Image
    Zhang, Yinda; Funkhouser, Thomas
    2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018: 175-185
  • [3] SLFNet: A Stereo and LiDAR Fusion Network for Depth Completion
    Zhang, Yongjian; Wang, Longguang; Li, Kunhong; Fu, Zhiheng; Guo, Yulan
    IEEE Robotics and Automation Letters, 2022, 7(4): 10605-10612
  • [4] Counterfactual Depth from a Single RGB Image
    Issaranon, Theerasit; Zou, Chuhang; Forsyth, David
    2019 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2019: 2129-2138
  • [5] CostDCNet: Cost Volume Based Depth Completion for a Single RGB-D Image
    Kam, Jaewon; Kim, Jungeon; Kim, Soongjin; Park, Jaesik; Lee, Seungyong
    Computer Vision - ECCV 2022, Pt II, 2022, 13662: 257-274
  • [6] NNNet: New Normal Guided Depth Completion From Sparse LiDAR Data and Single Color Image
    Liu, Jiade; Jung, Cheolkon
    IEEE Access, 2022, 10: 114252-114261
  • [7] Multi-scale features fusion from sparse LiDAR data and single image for depth completion
    Wang, Benzhang; Feng, Yiliu; Liu, Hengzhu
    Electronics Letters, 2018, 54(24): 1375-1376
  • [8] Semantic Scene Completion from a Single Depth Image
    Song, Shuran; Yu, Fisher; Zeng, Andy; Chang, Angel X.; Savva, Manolis; Funkhouser, Thomas
    30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017: 190-198
  • [9] FloW Vision: Depth Image Enhancement by Combining Stereo RGB-Depth Sensor
    Waskitho, Suryo Aji; Alfarouq, Ardiansyah; Sukaridhoto, Sritrusta; Pramadihanto, Dadet
    2016 International Conference on Knowledge Creation and Intelligent Computing (KCIC), 2016: 182-187
  • [10] Generating High Resolution Depth Image from Low Resolution LiDAR Data using RGB Image
    Yamakawa, Kento; Sakaue, Fumihiko; Sato, Jun
    Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), Vol 4, 2022: 659-665