Edge-Aware Spatial Propagation Network for Multi-view Depth Estimation

Cited by: 0
|
Authors
Xu, Siyuan [1 ]
Xu, Qingshan [1 ]
Su, Wanjuan [1 ]
Tao, Wenbing [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Wuhan, Peoples R China
Keywords
Multi-view stereo; Depth estimation; Edge clues; 3D reconstruction;
DOI
10.1007/s11063-023-11356-4
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning has brought great improvements to multi-view stereo. Recent approaches typically take raw images as input and estimate depth through deep networks. However, edge information, a primary geometric cue that captures scene structures well, is ignored by existing multi-view stereo networks. To this end, we present the Edge-aware Spatial Propagation Network (ESPDepth), a novel depth estimation network that exploits edges to aid the understanding of scene structures. Specifically, we first generate a coarse initial depth map with a shallow network. We then design an Edge Information Encoding (EIE) module to encode edge-aware features from the initial depth. Subsequently, the proposed Edge-Aware spatial Propagation (EAP) module guides iterative propagation on cost volumes. Finally, the edge-optimized cost volumes are used to obtain the final depth map, serving as a refinement process. By introducing edge information into the propagation of cost volumes, the proposed method captures geometric shapes well, alleviating the negative effects of abrupt depth changes at object edges in real scenes. Experiments on the ScanNet and 7-Scenes datasets demonstrate that our method produces precise depth estimates, with improvements in both global structures and detailed regions.
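The propagation step described in the abstract is in the spirit of spatial propagation networks such as CSPN: values are diffused across the image with affinities that collapse at depth edges, so smoothing does not bleed across discontinuities. The following is a minimal, hypothetical NumPy sketch of this idea applied directly to a depth map; the function name, the exponential affinity, and the 4-neighbourhood are illustrative assumptions, not the authors' implementation (which operates on cost volumes via the EIE/EAP modules).

```python
import numpy as np

def edge_aware_propagation(depth, sigma=0.1, iters=3):
    """Illustrative edge-aware spatial propagation (CSPN-style sketch).

    Each iteration replaces every pixel with an affinity-weighted average
    of its 4-neighbours plus itself; the affinity decays exponentially
    with the local depth difference, so values propagate within smooth
    regions but not across depth edges.
    """
    d = depth.astype(np.float64).copy()
    for _ in range(iters):
        # Edge-replicated shifted copies of the 4-neighbourhood.
        up    = np.pad(d, ((1, 0), (0, 0)), mode="edge")[:-1, :]
        down  = np.pad(d, ((0, 1), (0, 0)), mode="edge")[1:, :]
        left  = np.pad(d, ((0, 0), (1, 0)), mode="edge")[:, :-1]
        right = np.pad(d, ((0, 0), (0, 1)), mode="edge")[:, 1:]
        # Edge-aware affinities: a large depth jump yields ~zero weight.
        nbrs = (up, down, left, right)
        w = [np.exp(-np.abs(n - d) / sigma) for n in nbrs]
        num = sum(wi * ni for wi, ni in zip(w, nbrs)) + d  # centre weight = 1
        den = sum(w) + 1.0
        d = num / den
    return d

# Two flat regions separated by a sharp depth edge: propagation suppresses
# the noise inside a region while the discontinuity survives.
depth = np.concatenate([np.ones((8, 4)), 5.0 * np.ones((8, 4))], axis=1)
depth[3, 1] += 0.3  # small noise inside the near region
out = edge_aware_propagation(depth)
```

In this toy setup, the affinity across the 1-to-5 depth jump is exp(-40) ≈ 0, so the edge stays sharp, while the noisy pixel is pulled back toward its flat neighbourhood.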
Pages: 10905-10923
Page count: 19
Related papers
50 records in total
  • [1] Edge-Aware Spatial Propagation Network for Multi-view Depth Estimation
    Siyuan Xu
    Qingshan Xu
    Wanjuan Su
    Wenbing Tao
    [J]. Neural Processing Letters, 2023, 55: 10905-10923
  • [2] Fast and Accurate Satellite Multi-view Stereo using Edge-Aware Interpolation
    Wang, Ke
    Frahm, Jan-Michael
    [J]. Proceedings 2017 International Conference on 3D Vision (3DV), 2017: 365-373
  • [3] Efficient Edge-Preserving Multi-View Stereo Network for Depth Estimation
    Su, Wanjuan
    Tao, Wenbing
    [J]. Thirty-Seventh AAAI Conference on Artificial Intelligence, Vol 37, No 2, 2023: 2348-2356
  • [4] Edge-Aware Monocular Dense Depth Estimation with Morphology
    Li, Zhi
    Zhu, Xiaoyang
    Yu, Haitao
    Zhang, Qi
    Jiang, Yongshi
    [J]. 2020 25th International Conference on Pattern Recognition (ICPR), 2021: 2935-2942
  • [5] Asymmetric Edge-Aware Transformers for Monocular Endoscopic Depth Estimation
    Wu, Ming
    Qi, Hao
    Fan, Wenkang
    Ke, Sunkui
    Zeng, Hui-Qing
    Chen, Yinran
    Luo, Xiongbiao
    [J]. Image-Guided Procedures, Robotic Interventions, and Modeling, Medical Imaging 2024, 2024, 12928
  • [6] Uncertainty Guided Multi-View Stereo Network for Depth Estimation
    Su, Wanjuan
    Xu, Qingshan
    Tao, Wenbing
    [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(11): 7796-7808
  • [7] Multi-View Depth Estimation by Fusing Single-View Depth Probability with Multi-View Geometry
    Bae, Gwangbin
    Budvytis, Ignas
    Cipolla, Roberto
    [J]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), 2022: 2832-2841
  • [8] Unsupervised Multi-View Constrained Convolutional Network for Accurate Depth Estimation
    Zhang, Yuyang
    Xu, Shibiao
    Wu, Baoyuan
    Shi, Jian
    Meng, Weiliang
    Zhang, Xiaopeng
    [J]. IEEE Transactions on Image Processing, 2020, 29: 7019-7031
  • [9] Multi-Resolution Edge-aware Lighting Enhancement Network
    Gong, Wenyong
    Chen, Wenzhu
    Yu, Zhongwei
    Xie, Xiaohua
    [J]. Computers & Graphics-UK, 2023, 116: 55-63
  • [10] EPN: Edge-Aware PointNet for Object Recognition from Multi-View 2.5D Point Clouds
    Ahmed, Syeda Mariam
    Liang, Pan
    Chew, Chee Meng
    [J]. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019: 3445-3450