A Post-Rectification Approach of Depth Images of Kinect v2 for 3D Reconstruction of Indoor Scenes

Cited by: 25
Authors
Jiao, Jichao [1 ]
Yuan, Libin [1 ]
Tang, Weihua [2 ]
Deng, Zhongliang [1 ]
Wu, Qi [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Sch Elect Engn, Beijing 100876, Peoples R China
[2] China State Construct Engn Corp Ltd CSCEC, Beijing 100876, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
camera calibration; Kinect v2; reflectivity-related depth error; simultaneous localization and mapping (SLAM); time-of-flight; RGB-D CAMERA; CALIBRATION; SENSORS;
DOI
10.3390/ijgi6110349
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
3D reconstruction of indoor scenes is a hot research topic in computer vision. Reconstructing fast, low-cost, and accurate dense 3D maps of indoor scenes has applications in indoor robot positioning, navigation, and semantic mapping. Previous studies have used the Microsoft Kinect for Windows v2 (Kinect v2) for this task; however, the accuracy and precision of its depth information, as well as the accuracy of the correspondence between the RGB and depth (RGB-D) images, still leave room for improvement. In this paper, we propose a post-rectification approach for depth images that improves the accuracy and precision of depth information. Firstly, we calibrate the Kinect v2 with a planar checkerboard pattern. Secondly, we post-rectify the depth images according to the reflectivity-related depth error. Finally, we evaluate this post-rectification approach in terms of accuracy and precision. To validate its effect, we apply it to RGB-D simultaneous localization and mapping (SLAM) in an indoor environment. Experimental results show that with our post-rectification approach, the RGB-D SLAM system produces more accurate and visually better 3D reconstructions of indoor scenes than other state-of-the-art methods.
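The record above only summarizes the method; the paper's actual calibration parameters and reflectivity error model are not reproduced here. As a rough illustration of the two-step pipeline the abstract describes, the following Python sketch calibrates the sensor's IR camera from checkerboard captures with OpenCV and then applies a hypothetical polynomial correction of depth as a function of IR amplitude (a proxy for surface reflectivity). The board geometry, the folder name `ir_frames/`, the polynomial degree, and the helpers `fit_correction`/`rectify_depth` are all assumptions for illustration, not the authors' published code.

```python
import glob

import cv2
import numpy as np

# --- Step 1: intrinsic calibration with a planar checkerboard ---
# Board geometry and image folder are assumptions; the abstract only
# states that a planar checkerboard pattern was used.
PATTERN = (9, 6)    # inner corners per row and column (hypothetical)
SQUARE_MM = 25.0    # checkerboard square edge length in mm (hypothetical)

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts, img_size = [], [], None
for path in glob.glob("ir_frames/*.png"):  # hypothetical IR captures
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_pts.append(objp)
    img_pts.append(corners)
    img_size = gray.shape[::-1]  # (width, height)

# Camera matrix K and distortion coefficients for undistorting depth/IR.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, img_size, None, None)

# --- Step 2: reflectivity-related post-rectification of depth ---
# Stand-in error model: fit the observed depth error (measured minus
# ground-truth depth of a known planar target) as a polynomial in IR
# amplitude, then subtract the predicted error per pixel. The degree-2
# fit is an assumption, not the published model.
def fit_correction(ir_amplitude, depth_error_mm, deg=2):
    return np.polynomial.polynomial.polyfit(
        ir_amplitude.ravel(), depth_error_mm.ravel(), deg)

def rectify_depth(depth_mm, ir_amplitude, coeffs):
    predicted_error = np.polynomial.polynomial.polyval(
        ir_amplitude.astype(np.float64), coeffs)
    return depth_mm - predicted_error
```

In practice the correction would be fitted from depth measurements of a planar target at known distances, which matches the accuracy and precision evaluation the abstract mentions before the SLAM experiment.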
Pages: 15