Consistent Depth Prediction for Transparent Object Reconstruction from RGB-D Camera

Cited: 0
Authors
Cai, Yuxiang [1]
Zhu, Yifan [1]
Zhang, Haiwei [1]
Ren, Bo [1]
Institutions
[1] Nankai Univ, Tianjin, Peoples R China
Keywords
SLAM;
DOI
10.1109/ICCV51070.2023.00320
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Transparent objects are common in indoor scenes, but their depth is hard to estimate. Commercial depth cameras struggle to measure the depth of transparent objects because of light reflection and refraction on their surfaces, and they therefore tend to produce noisy and incorrect depth values for them. These incorrect depth data cause traditional RGB-D SLAM methods to fail when reconstructing scenes that contain transparent objects. An accurate depth value for each transparent object must be restored in advance, and this depth must remain consistent across different views, or the reconstruction result will be distorted. Previous depth prediction methods for transparent objects can restore the missing depth values, but none of them yields good reconstruction results because their predictions are inconsistent across views. In this work, we propose a real-time reconstruction method that uses a novel stereo-based depth prediction network to keep depth prediction consistent over a sequence of images. Because no video dataset of transparent objects is currently available for training our model, we construct a synthetic RGB-D video dataset containing different transparent objects. Moreover, to test generalization capability, we capture videos of real scenes using a RealSense D435i RGB-D camera. We compare metrics on our dataset and SLAM reconstruction results in both synthetic and real scenes against previous methods. Experiments show significant improvements in the accuracy of depth prediction and scene reconstruction.
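The pipeline described in the abstract hinges on filling the invalid depth that the sensor reports on transparent surfaces before the frames reach the SLAM back end. Below is a minimal sketch, not the authors' implementation: it captures aligned RGB-D frames from a RealSense D435i with the pyrealsense2 Python bindings and fills missing depth pixels with the output of a hypothetical per-frame model `predict_depth`, a placeholder standing in for the paper's stereo-based prediction network.

```python
# Minimal sketch (assumptions noted): stream aligned RGB-D frames from a
# RealSense D435i and fill invalid depth pixels, which are typical on
# transparent surfaces, before handing the frames to SLAM / TSDF fusion.
import numpy as np
import pyrealsense2 as rs


def predict_depth(rgb, depth):
    # Placeholder for a learned depth-prediction network (assumption, not
    # the paper's model); returns the sensor depth unchanged so the script
    # runs end to end.
    return depth


pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)
align = rs.align(rs.stream.color)  # align depth to the color frame
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

try:
    for _ in range(300):  # roughly 10 s of video at 30 fps
        frames = align.process(pipeline.wait_for_frames())
        depth = np.asanyarray(frames.get_depth_frame().get_data()).astype(np.float32) * depth_scale
        rgb = np.asanyarray(frames.get_color_frame().get_data())

        predicted = predict_depth(rgb, depth)
        invalid = depth == 0  # the D435i reports 0 where depth is missing
        depth[invalid] = predicted[invalid]  # filled depth map, ready for fusion
finally:
    pipeline.stop()
```

In a full system, the filled depth map would be passed to the reconstruction back end each frame; the consistency the paper targets comes from the prediction network itself, which the placeholder above does not model.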
Pages: 3436 - 3445
Number of pages: 10
Related papers
50 records in total
  • [41] Robust Object Recognition Under Partial Occlusions Using an RGB-D Camera
    Yoo, Yong-Ho
    Kim, Jong-Hwan
    ROBOT INTELLIGENCE TECHNOLOGY AND APPLICATIONS 3, 2015, 345 : 647 - 654
  • [42] Sensor Data Fusion of LIDAR with Stereo RGB-D Camera for Object Tracking
    Dieterle, Thomas
    Particke, Florian
    Patino-Studencki, Lucila
    Thielecke, Joern
    2017 IEEE SENSORS, 2017, : 1173 - 1175
  • [43] RGB-D Camera-based Object Grounding Surface Estimation System
    Natori, Natsuki
    Mikuriya, Masayuki
    Nakayama, Yu
    Ogino, Fumitoshi
    2024 IEEE 21ST CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2024, : 586 - 589
  • [44] Moving Object Detection Using Adaptive Blind Update and RGB-D Camera
    Dorudian, Navid
    Lauria, Stanislao
    Swift, Stephen
    IEEE SENSORS JOURNAL, 2019, 19 (18) : 8191 - 8201
  • [45] Human Object Recognition Using Colour and Depth Information from an RGB-D Kinect Sensor
    Southwell, Benjamin John
    Fang, Gu
    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2013, 10
  • [46] A Flexible Scene Representation for 3D Reconstruction Using an RGB-D Camera
    Thomas, Diego
    Sugimoto, Akihiro
    2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2013, : 2800 - 2807
  • [47] An RGB-D Descriptor for Object Classification
    Arican, Erkut
    Aydin, Tarkan
    ROMANIAN JOURNAL OF INFORMATION SCIENCE AND TECHNOLOGY, 2022, 25 (3-4): 338 - 349
  • [48] HiDAnet: RGB-D Salient Object Detection via Hierarchical Depth Awareness
    Wu, Zongwei
    Allibert, Guillaume
    Meriaudeau, Fabrice
    Ma, Chao
    Demonceaux, Cedric
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 2160 - 2173
  • [49] Depth-aware lightweight network for RGB-D salient object detection
    Ling, Liuyi
    Wang, Yiwen
    Wang, Chengjun
    Xu, Shanyong
    Huang, Yourui
    IET IMAGE PROCESSING, 2023, 17 (08) : 2350 - 2361
  • [50] Depth cue enhancement and guidance network for RGB-D salient object detection
    Li, Xiang
    Zhang, Qing
    Yan, Weiqi
    Dai, Meng
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2023, 95