Consistent Depth Prediction for Transparent Object Reconstruction from RGB-D Camera

Cited by: 0
Authors
Cai, Yuxiang [1]
Zhu, Yifan [1]
Zhang, Haiwei [1]
Ren, Bo [1]
Affiliations
[1] Nankai Univ, Tianjin, Peoples R China
Keywords
SLAM;
DOI
10.1109/ICCV51070.2023.00320
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Transparent objects are commonly seen in indoor scenes, but their depth is hard to estimate. Commercial depth cameras struggle to measure the depth of transparent objects because of light reflection and refraction at their surfaces, and therefore tend to produce noisy and incorrect depth values for them. These incorrect depth data cause traditional RGB-D SLAM methods to fail when reconstructing scenes that contain transparent objects. The true depth of a transparent object must be restored in advance, and the restored depth must remain consistent across different views, or the reconstruction result will be distorted. Previous depth prediction methods for transparent objects can restore these missing depth values, but none of them yields a good reconstruction because their predictions are inconsistent across views. In this work, we propose a real-time reconstruction method that uses a novel stereo-based depth prediction network to keep depth predictions consistent over a sequence of images. Because no video dataset of transparent objects is currently available to train our model, we construct a synthetic RGB-D video dataset containing different transparent objects. To test generalization capability, we also capture videos of real scenes with a RealSense D435i RGB-D camera. We compare depth-prediction metrics on our dataset and SLAM reconstruction results in both synthetic and real scenes against previous methods. Experiments show significant improvements in the accuracy of both depth prediction and scene reconstruction.
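The abstract's central argument is that per-frame depth restoration alone is not enough: if the restored depth of a transparent surface drifts between views, multi-view fusion distorts the reconstruction. The sketch below is a hypothetical illustration, not the paper's network or evaluation code; it shows one way to quantify cross-view inconsistency, assuming known pinhole intrinsics `K` and a relative camera pose `T_ab`: warp the predicted depth of frame A into frame B and measure how much it disagrees with the depth predicted in frame B.

```python
import numpy as np

def backproject(depth, K):
    """Lift a depth map (H, W) to 3-D points (H*W, 3) in camera coordinates."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def cross_view_depth_error(depth_a, depth_b, K, T_ab):
    """Warp depth_a into view B via the 4x4 pose T_ab (camera A -> camera B)
    and return the mean absolute depth difference against depth_b over pixels
    where both maps are valid (> 0) and the warped point lands inside frame B."""
    H, W = depth_a.shape
    pts_a = backproject(depth_a, K)
    pts_a_h = np.hstack([pts_a, np.ones((pts_a.shape[0], 1))])
    pts_b = (T_ab @ pts_a_h.T).T[:, :3]
    z_b = pts_b[:, 2]
    # Project the transformed points into view B's image plane.
    u_b = np.round(K[0, 0] * pts_b[:, 0] / np.maximum(z_b, 1e-6) + K[0, 2]).astype(int)
    v_b = np.round(K[1, 1] * pts_b[:, 1] / np.maximum(z_b, 1e-6) + K[1, 2]).astype(int)
    valid = (depth_a.reshape(-1) > 0) & (z_b > 0)
    valid &= (u_b >= 0) & (u_b < W) & (v_b >= 0) & (v_b < H)
    observed = np.zeros_like(z_b)
    observed[valid] = depth_b[v_b[valid], u_b[valid]]
    valid &= observed > 0
    return float(np.mean(np.abs(z_b[valid] - observed[valid])))
```

Under these assumptions, a low error between consecutive frames indicates the kind of view-consistent prediction the abstract argues is necessary for undistorted fusion, whereas independent per-frame predictions on transparent regions would show large errors here even if each frame looks plausible on its own.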
Pages: 3436-3445
Page count: 10
Related Papers
50 records in total
  • [1] Transparent object detection and location based on RGB-D camera
    Chen Guo-Hua
    Wang Jun-Yi
    Zhang Ai-Jun
    16TH INTERNATIONAL CONFERENCE ON METROLOGY AND PROPERTIES OF ENGINEERING SURFACES (MET AND PROPS 2017), 2019, 1183
  • [2] Fusing Depth and Silhouette for Scanning Transparent Object with RGB-D Sensor
    Ji, Yijun
    Xia, Qing
    Zhang, Zhijiang
    INTERNATIONAL JOURNAL OF OPTICS, 2017, 2017
  • [3] Robust Object Tracking based on RGB-D Camera
    Qi, Wenjing
    Yang, Yinfei
    Yi, Meng
    Li, Yunfeng
    Pizlo, Zygmunt
    Latecki, Longin Jan
    2014 11TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA), 2014, : 2873 - 2878
  • [4] Wound detection and reconstruction using RGB-D camera
    Filko, Damir
    Nyarko, Emmanuel Karlo
    Cupec, Robert
    2016 39TH INTERNATIONAL CONVENTION ON INFORMATION AND COMMUNICATION TECHNOLOGY, ELECTRONICS AND MICROELECTRONICS (MIPRO), 2016, : 1217 - 1222
  • [5] Robust 3D Reconstruction With an RGB-D Camera
    Wang, Kangkan
    Zhang, Guofeng
    Bao, Hujun
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2014, 23 (11) : 4893 - 4906
  • [6] Fast Motion Object Detection Algorithm Using Complementary Depth Image on an RGB-D Camera
    Sun, Chi-Chia
    Wang, Yi-Hua
    Sheu, Ming-Hwa
    IEEE SENSORS JOURNAL, 2017, 17 (17) : 5728 - 5734
  • [7] Hand Position Tracking Using a Depth Image from a RGB-d Camera
    Marino Lizarazo, Daniel Leonardo
    Tumialan Borja, Jose Antonio
    2015 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL TECHNOLOGY (ICIT), 2015, : 1680 - 1687
  • [8] LARGE-AREA DEPTH RECOVERY FOR RGB-D CAMERA
    Yan, Zengqiang
    Yu, Li
    Xiong, Zixiang
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015, : 1409 - 1413
  • [9] Robust Multiple Object Tracking in RGB-D Camera Networks
    Zhao, Yongheng
    Carraro, Marco
    Munaro, Matteo
    Menegatti, Emanuele
    2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2017, : 6625 - 6632
  • [10] Geometry-Aware ICP for Scene Reconstruction from RGB-D Camera
    Bo Ren
    Jia-Cheng Wu
    Ya-Lei Lv
    Ming-Ming Cheng
    Shao-Ping Lu
    Journal of Computer Science and Technology, 2019, 34 : 581 - 593