360MVSNet: Deep Multi-view Stereo Network with 360° Images for Indoor Scene Reconstruction

Cited by: 1
Authors
Chiu, Ching-Ya [1 ]
Wu, Yu-Ting [2 ]
Shen, I-Chao [3 ]
Chuang, Yung-Yu [1 ]
Affiliations
[1] Natl Taiwan Univ, Taipei, Taiwan
[2] Natl Taipei Univ, New Taipei, Taiwan
[3] Univ Tokyo, Tokyo, Japan
DOI
10.1109/WACV56688.2023.00307
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent multi-view stereo methods have achieved promising results with the advancement of deep learning techniques. Despite this progress, the limited field of view of regular images means that reconstructing large indoor environments still requires capturing many images with sufficient visual overlap, which is quite labor-intensive. 360° images cover a much larger field of view than regular images and can thus facilitate the capture process. In this paper, we present 360MVSNet, the first deep learning network for multi-view stereo with 360° images. Our method combines uncertainty estimation with a spherical sweeping module for 360° images captured from multiple viewpoints to construct multi-scale cost volumes. By regressing the volumes in a coarse-to-fine manner, our method obtains high-resolution depth maps. Furthermore, we have constructed EQMVS, a large-scale synthetic dataset that consists of over 50K pairs of RGB and depth maps in equirectangular projection. Experimental results demonstrate that our method can reconstruct large synthetic and real-world indoor scenes with significantly better completeness than previous traditional and learning-based methods, while saving both time and effort in data acquisition.
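To make the spherical sweeping idea concrete, below is a minimal sketch (not the authors' released code) of how depth hypotheses can be swept over an equirectangular reference view: each pixel is mapped to a ray on the unit sphere, candidate depths place 3D points along that ray, and the points are re-projected into a source view to obtain the coordinates from which a cost volume would be built. The function names, the uniform depth sampling, and the reference-to-source transform convention are all illustrative assumptions.

```python
# Illustrative sketch of spherical sweeping over equirectangular views.
# All names and conventions here are assumptions, not the paper's API.
import numpy as np

def equirect_rays(height, width):
    """Unit-sphere ray directions for every pixel of an equirectangular image.

    Longitude spans [-pi, pi) across the width, latitude [pi/2, -pi/2] down
    the height, so pixel (row, col) maps to a direction on the unit sphere.
    """
    lon = (np.arange(width) + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(height) + 0.5) / height * np.pi
    lat, lon = np.meshgrid(lat, lon, indexing="ij")
    # Convert spherical angles to Cartesian directions (x, y, z).
    return np.stack(
        [np.cos(lat) * np.sin(lon), np.sin(lat), np.cos(lat) * np.cos(lon)],
        axis=-1,
    )  # shape (H, W, 3)

def sphere_sweep(rays, depths, R, t):
    """Warp each (pixel, depth) hypothesis into a source view's pixel grid.

    rays:   (H, W, 3) unit directions in the reference camera frame.
    depths: (D,) candidate radii of the sweeping spheres.
    R, t:   rotation (3, 3) and translation (3,) from reference to source.
    Returns (D, H, W, 2) equirectangular (row, col) coordinates in the
    source view; a network would sample source features at these locations
    to build a cost volume over the D hypotheses.
    """
    H, W, _ = rays.shape
    # 3D points on each hypothesis sphere, moved into the source frame.
    pts = rays[None] * depths[:, None, None, None]  # (D, H, W, 3)
    pts = pts @ R.T + t                             # rigid transform
    # Re-project to spherical angles, then to source pixel coordinates.
    radius = np.linalg.norm(pts, axis=-1)
    lon = np.arctan2(pts[..., 0], pts[..., 2])
    lat = np.arcsin(np.clip(pts[..., 1] / radius, -1.0, 1.0))
    col = (lon + np.pi) / (2.0 * np.pi) * W - 0.5
    row = (np.pi / 2.0 - lat) / np.pi * H - 0.5
    return np.stack([row, col], axis=-1)

# Tiny usage example: 64 uniformly spaced hypotheses for a low-res view.
rays = equirect_rays(256, 512)
coords = sphere_sweep(rays, np.linspace(0.5, 10.0, 64),
                      np.eye(3), np.array([0.2, 0.0, 0.0]))
print(coords.shape)  # (64, 256, 512, 2)
```

In the coarse-to-fine design the abstract describes, such warped coordinates would presumably be computed at multiple resolutions, with the depth range at each finer scale guided by the uncertainty estimated at the coarser one; the sketch above shows only the single-scale geometric warp.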
Pages: 3056 - 3065
Page count: 10
Related Papers
50 records
  • [1] DRI-MVSNet: A depth residual inference network for multi-view stereo images
    Li, Ying
    Li, Wenyue
    Zhao, Zhijie
    Fan, JiaHao
    PLOS ONE, 2022, 17 (03):
  • [2] LE-MVSNet: Lightweight Efficient Multi-view Stereo Network
    Kong, Changfei
    Zhang, Ziyi
    Mao, Jiafa
    Chan, Sixian
    Sheng, Weiguo
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT VIII, 2023, 14261 : 484 - 497
  • [3] An Efficient Multi-view Stereo Reconstruction Method Based On MA-MVSNet
    Zhang, Xiaoyan
    Shi, Hao
    Wang, Chaozheng
    PROCEEDINGS OF 2023 7TH INTERNATIONAL CONFERENCE ON ELECTRONIC INFORMATION TECHNOLOGY AND COMPUTER ENGINEERING, EITCE 2023, 2023, : 456 - 463
  • [4] MVSNet: Depth Inference for Unstructured Multi-view Stereo
    Yao, Yao
    Luo, Zixin
    Li, Shiwei
    Fang, Tian
    Quan, Long
    COMPUTER VISION - ECCV 2018, PT VIII, 2018, 11212 : 785 - 801
  • [5] Vis-MVSNet: Visibility-Aware Multi-view Stereo Network
    Zhang, Jingyang
    Li, Shiwei
    Luo, Zixin
    Fang, Tian
    Yao, Yao
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2023, 131 (01) : 199 - 214
  • [6] OD-MVSNet: Omni-dimensional dynamic multi-view stereo network
    Pan, Ke
    Li, Kefeng
    Zhang, Guangyuan
    Zhu, Zhenfang
    Wang, Peng
    Wang, Zhenfei
    Fu, Chen
    Li, Guangchen
    Ding, Yuxuan
    PLOS ONE, 2024, 19 (08):
  • [7] Piecewise planar scene reconstruction and optimization for multi-view stereo
    Kim, Hyojin
    Xiao, Hong
    Max, Nelson
    LECTURE NOTES IN COMPUTER SCIENCE, 2013, 7727 (PART 4): 191 - 204
  • [8] DAR-MVSNet: a novel dual attention residual network for multi-view stereo
    Li, Tingshuai
    Liang, Hu
    Wen, Changchun
    Qu, Jiacheng
    Zhao, Shengrong
    Zhang, Qingmeng
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (8-9) : 5857 - 5866
  • [9] SA-MVSNet: Self-attention-based multi-view stereo network for 3D reconstruction of images with weak texture
    Yang, Ronghao
    Miao, Wang
    Zhang, Zhenxin
    Liu, Zhenlong
    Li, Mubai
    Lin, Bin
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 131