Feature back-projection guided residual refinement for real-time stereo matching network

Cited by: 3
Authors
Wen, Bin [1 ,2 ]
Zhu, Han [1 ]
Yang, Chao [1 ,2 ]
Li, Zhicong [1 ]
Cao, Renxuan [1 ]
Affiliations
[1] China Three Gorges Univ, Coll Elect Engn & New Energy, Yichang 443000, Peoples R China
[2] China Three Gorges Univ, Hubei Prov Collaborat Innovat Ctr New Energy Micr, Yichang 443000, Peoples R China
Keywords
Convolution neural networks; Feature back-projection; Real-time; Residual refinement; Stereo matching;
DOI
10.1016/j.image.2022.116636
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Code
0808 ; 0809 ;
Abstract
In recent stereo matching research, deep convolutional neural networks (CNNs) have shown excellent performance in estimating depth from stereo image pairs. Previous works mainly focus on improving the robustness of the stereo matching network to obtain higher matching accuracy. In this paper, we propose an end-to-end real-time stereo matching network (FBPGNet). FBPGNet consists of three parts: a feature extraction module (FEM), an initial disparity estimation module (IDEM), and a feature back-projection guided residual refinement module (FBPG). The FEM is designed to capture semantic and contextual information, and is composed of residual blocks, dilated convolutions, and a spatial attention mechanism. The IDEM produces an initial low-resolution (LR) disparity map using an hourglass 3D convolution architecture. The FBPG then refines the up-sampled low-resolution disparity map, taking the features from the FEM and the low-resolution disparity map as guide information. Experiments show that the proposed stereo matching network achieves prediction accuracy and inference speed comparable to recent real-time stereo matching networks, reaching 25 fps on a high-end GPU.
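The pipeline the abstract describes (matching cost volume → initial low-resolution disparity → back-projection guided residual refinement) can be illustrated with a deliberately tiny, dependency-free sketch. This is not the paper's implementation: the function names, the 1-D single-row features, the winner-takes-all step standing in for the 3D hourglass, and the one-pixel residual update are all simplifying assumptions made here for illustration.

```python
def cost_volume(left, right, max_disp):
    # Per-pixel matching cost |left(x) - right(x - d)| for each candidate
    # disparity d. left/right are single-row feature lists for illustration;
    # positions where x - d falls outside the image get +inf cost.
    W = len(left)
    return [[abs(left[x] - right[x - d]) if x - d >= 0 else float("inf")
             for x in range(W)]
            for d in range(max_disp)]

def initial_disparity(cost):
    # Winner-takes-all over the disparity axis -- a crude stand-in for the
    # IDEM's 3D hourglass aggregation producing the initial LR disparity.
    W = len(cost[0])
    return [min(range(len(cost)), key=lambda d: cost[d][x]) for x in range(W)]

def back_project(right, disp):
    # Warp the right view toward the left view using the disparity map
    # (sample right(x - d), clamping at the image border).
    W = len(right)
    return [right[min(max(x - int(round(disp[x])), 0), W - 1)]
            for x in range(W)]

def refine(left, right, disp):
    # Toy residual refinement: keep a pixel's disparity unless shifting it
    # by one reduces the back-projection error against the left view.
    err = [abs(l, ) if False else abs(l - w)
           for l, w in zip(left, back_project(right, disp))]
    err_plus = [abs(l - w)
                for l, w in zip(left, back_project(right, [d + 1 for d in disp]))]
    return [d + 1 if ep < e else d
            for d, e, ep in zip(disp, err, err_plus)]
```

On a synthetic pair where the right row is the left row shifted by two pixels, `initial_disparity` recovers disparity 2 away from the border, and `refine` corrects an under-estimated disparity of 1 up to 2 wherever the re-projection error shrinks.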
Pages: 8
Related Papers
50 records
  • [1] Guided aggregation and disparity refinement for real-time stereo matching
    Yang, Jinlong
    Wu, Cheng
    Wang, Gang
    Chen, Dong
    [J]. SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (05) : 4467 - 4477
  • [2] Feature-Guided Spatial Attention Upsampling for Real-Time Stereo Matching Network
    Xie, Yun
    Zheng, Shaowu
    Li, Weihua
    [J]. IEEE MULTIMEDIA, 2021, 28 (01) : 38 - 47
  • [3] HITNet: Hierarchical Iterative Tile Refinement Network for Real-time Stereo Matching
    Tankovich, Vladimir
    Hane, Christian
    Zhang, Yinda
    Kowdle, Adarsh
    Fanello, Sean
    Bouaziz, Sofien
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 14357 - 14367
  • [4] ITERATIVE REFINEMENT FOR REAL-TIME LOCAL STEREO MATCHING
    Dumont, Maarten
    Goorts, Patrik
    Maesen, Steven
    Degraen, Donald
    Bekaert, Philippe
    Lafruit, Gauthier
    [J]. 2014 INTERNATIONAL CONFERENCE ON 3D IMAGING (IC3D), 2014.
  • [5] Real-Time Stereo Matching Algorithm with Hierarchical Refinement
    Wang, Yufeng
    Wang, Hongwei
    Liu, Yu
    Yang, Mingquan
    Quan, Jicheng
    [J]. Guangxue Xuebao/Acta Optica Sinica, 2020, 40 (09):
  • [6] Real-Time Object Detection Algorithm Based on Back-Projection
    Zhang, Chen
    Qian, Xu
    [J]. MECHATRONICS, ROBOTICS AND AUTOMATION, PTS 1-3, 2013, 373-375 : 483 - 486
  • [7] REAL-TIME STEREO MATCHING NETWORK WITH HIGH ACCURACY
    Lee, Hyunmin
    Shin, Yongho
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 4280 - 4284
  • [8] GHRNET: GUIDED HIERARCHICAL REFINEMENT NETWORK FOR STEREO MATCHING
    Tan, Bin
    Chen, Kai
    Yao, Jian
    Li, Jie
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 4459 - 4463
  • [9] Entropy Based Log Chromaticity Projection for Real-time Stereo Matching
    Raghavendra, U.
    Makkithaya, Krishnamoorthi
    Karunakar, A. K.
    [J]. 2ND INTERNATIONAL CONFERENCE ON COMMUNICATION, COMPUTING & SECURITY [ICCCS-2012], 2012, 1 : 223 - 230
  • [10] Context-Guided Multi-view Stereo with Depth Back-Projection
    Feng, Tianxing
    Zhang, Zhe
    Xiong, Kaiqiang
    Wang, Ronggang
    [J]. MULTIMEDIA MODELING, MMM 2023, PT II, 2023, 13834 : 91 - 102