Multi-view stereo for weakly textured indoor 3D reconstruction

Cited by: 3
Authors
Wang, Tao [1 ]
Gan, Vincent J. L. [1 ,2 ,3 ]
Affiliations
[1] Natl Univ Singapore, Dept Built Environm, Singapore, Singapore
[2] Natl Univ Singapore, Ctr Digital Bldg Technol 5G, Singapore, Singapore
[3] Natl Univ Singapore, Dept Built Environm, Singapore 117566, Singapore
DOI
10.1111/mice.13149
CLC Number
TP39 [Computer Applications];
Subject Classification
081203; 0835;
Abstract
3D reconstruction provides an effective geometric representation to support various applications. Recently, learning-based multi-view stereo (MVS) algorithms have emerged that replace conventional hand-crafted features with deep representations encoded by convolutional neural networks, reducing feature-matching ambiguity and yielding more complete scene recovery from imagery data. However, state-of-the-art architectures are not designed for indoor environments, which abound with weakly textured or textureless objects. This paper proposes AttentionSPP-PatchmatchNet, a deep learning-based MVS algorithm designed for indoor 3D reconstruction. The algorithm integrates multi-scale feature sampling to produce global-context-aware feature maps and recalibrates the weights of essential features to tackle the challenges posed by indoor environments. A new dataset designed exclusively for indoor environments is presented to verify the performance of the proposed network. Experimental results show that AttentionSPP-PatchmatchNet outperforms state-of-the-art algorithms, with relative improvements of 132.87% and 163.55% at the 10 and 2 mm thresholds, respectively, making it suitable for accurate and complete indoor 3D reconstruction.
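The abstract names two mechanisms: multi-scale feature sampling that yields global-context-aware feature maps, and recalibration of the weights of essential features. Below is a minimal PyTorch sketch of those two ideas, assuming a spatial-pyramid-pooling (SPP) block and squeeze-and-excitation-style channel attention; the module names, pooling scales, and channel sizes are illustrative assumptions, not the authors' exact AttentionSPP-PatchmatchNet layers.

# Sketch of (1) SPP for multi-scale global context and (2) channel
# attention to recalibrate essential features. All layer choices here
# are assumptions for illustration, not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPBlock(nn.Module):
    """Pool features at several scales, then fuse back to full resolution."""
    def __init__(self, channels, pool_sizes=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(s),  # regional-to-global context
                nn.Conv2d(channels, channels // len(pool_sizes), 1, bias=False),
            )
            for s in pool_sizes
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        h, w = x.shape[-2:]
        ctx = [F.interpolate(b(x), size=(h, w), mode="bilinear",
                             align_corners=False) for b in self.branches]
        # Concatenate original features with upsampled context, then fuse.
        return self.fuse(torch.cat([x, *ctx], dim=1))

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style recalibration of channel importance."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.mlp(x.mean(dim=(-2, -1)))  # per-channel statistics
        return x * w[..., None, None]       # reweight essential features

# Usage: context-aware, recalibrated features for one view.
feats = torch.randn(1, 32, 128, 160)        # e.g. CNN features of one image
out = ChannelAttention(32)(SPPBlock(32)(feats))
print(out.shape)                            # torch.Size([1, 32, 128, 160])

Under these assumptions, the recalibrated, context-aware feature maps would stand in for the plain CNN features that a PatchmatchNet-style network feeds into its matching-cost computation.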
Pages: 1469-1489
Page count: 21
Related Papers
50 items in total
  • [1] Multi-view stereo reconstruction technique for weakly-textured surfaces
    Akutsu, K.
    Kanai, S.
    Date, H.
    Niina, Y.
    Honma, R.
    [J]. BRIDGE MAINTENANCE, SAFETY, MANAGEMENT, LIFE-CYCLE SUSTAINABILITY AND INNOVATIONS, 2021, : 2992 - 3000
  • [2] Multi-View Stereo 3D Edge Reconstruction
    Bignoli, Andrea
    Romanoni, Andrea
    Matteucci, Matteo
    [J]. 2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2018), 2018, : 867 - 875
  • [3] Underwater 3D reconstruction based on multi-view stereo
    Gu, Feifei
    Zhao, Juan
    Xu, Pei
    Huang, Shulan
    Zhang, Gaopeng
    Song, Zhan
    [J]. OCEAN OPTICS AND INFORMATION TECHNOLOGY, 2018, 10850
  • [4] Enhancing 3D reconstruction of textureless indoor scenes with IndoReal multi-view stereo (MVS)
    Wang, Tao
    Gan, Vincent J. L.
    [J]. AUTOMATION IN CONSTRUCTION, 2024, 166
  • [5] Combining Photometric Normals and Multi-View Stereo for 3D Reconstruction
    Grochulla, Martin
    Thormaehlen, Thorsten
    [J]. CVMP 2015: PROCEEDINGS OF THE 12TH EUROPEAN CONFERENCE ON VISUAL MEDIA PRODUCTION, 2015,
  • [6] PlaneMVS: 3D Plane Reconstruction from Multi-View Stereo
    Liu, Jiachen
    Ji, Pan
    Bansal, Nitin
    Cai, Changjiang
    Yan, Qingan
    Huang, Xiaolei
    Xu, Yi
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 8655 - 8665
  • [7] Improvement on Matching Breakage of Multi-View Stereo 3D Reconstruction
    Lin, Hung-Lin
    Lin, Tsung-Yi
    Li, Yi-Xuan
    Tseng, Yu-Sheng
    Li, Xin-Yi
    Cai, Qian-Wen
    Chen, Zheng
    Shi, Yi-Rou
    [J]. PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON ADVANCED MATERIALS FOR SCIENCE AND ENGINEERING (IEEE-ICAMSE 2016), 2016, : 423 - 425
  • [8] 3D Face Reconstruction based on Multi-View Stereo Algorithm
    Peng, Keju
    Guan, Tao
    Xu, Chao
    Zhou, Dongxiang
    [J]. 2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS IEEE-ROBIO 2014, 2014, : 2310 - 2314
  • [9] Revisiting PatchMatch Multi-View Stereo for Urban 3D Reconstruction
    Orsingher, Marco
    Zani, Paolo
    Medici, Paolo
    Bertozzi, Massimo
    [J]. 2022 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2022, : 190 - 196
  • [10] Pruning multi-view stereo net for efficient 3D reconstruction
    Xiang, Xiang
    Wang, Zhiyuan
    Lao, Shanshan
    Zhang, Baochang
    [J]. ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2020, 168 : 17 - 27