Depth-Aware Multi-Grid Deep Homography Estimation With Contextual Correlation

Citations: 32
Authors
Nie, Lang [1 ,2 ]
Lin, Chunyu [1 ,2 ]
Liao, Kang [1 ,2 ]
Liu, Shuaicheng [3 ]
Zhao, Yao [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R China
[2] Beijing Key Lab Adv Informat Sci & Network Techno, Beijing 100044, Peoples R China
[3] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Estimation; Shape; Feature extraction; Correlation; Costs; Deep learning; Strain; Homography estimation; mesh deformation; IMAGE; FEATURES;
DOI
10.1109/TCSVT.2021.3125736
CLC Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808; 0809;
Abstract
Homography estimation is an important task in computer vision applications such as image stitching, video stabilization, and camera calibration. Traditional homography estimation methods depend heavily on the quantity and distribution of feature correspondences, leading to poor robustness in low-texture scenes. Learning-based solutions, by contrast, try to learn robust deep features but show unsatisfactory performance in scenes with low overlap rates. In this paper, we address these two problems simultaneously by designing a contextual correlation layer (CCL). The CCL can efficiently capture long-range correlation within feature maps and can be flexibly used in a learning framework. In addition, since a single homography cannot represent the complex spatial transformation in depth-varying images with parallax, we propose to predict multi-grid homography from global to local. Moreover, we equip our network with a depth perception capability by introducing a novel depth-aware shape-preserved loss. Extensive experiments demonstrate the superiority of our method over state-of-the-art solutions on a synthetic benchmark dataset and a real-world dataset. The code and models will be available at https://github.com/nie-lang/Multi-Grid-Deep-Homography.
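To make the abstract's key mechanism concrete: the contextual correlation layer (CCL) builds on the idea of comparing every position in one feature map against every position in another, which a plain global correlation (cost) volume already captures. The sketch below is a minimal PyTorch illustration of that generic global-correlation idea only; it is not the authors' CCL, whose more efficient contextual design is described in the paper and repository, and the function name, shapes, and the softmax normalization here are assumptions made for illustration.

    import torch
    import torch.nn.functional as F

    def global_correlation(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Hypothetical helper, not the paper's CCL.
        # feat_a, feat_b: (B, C, H, W) feature maps from a shared encoder.
        # Returns (B, H*W, H, W): for each position of feat_a, a soft
        # matching distribution over every position of feat_b, so even
        # long-range (low-overlap) correspondences are represented.
        b, c, h, w = feat_a.shape
        fa = F.normalize(feat_a.flatten(2), dim=1)   # (B, C, H*W), unit-norm channels
        fb = F.normalize(feat_b.flatten(2), dim=1)   # (B, C, H*W)
        corr = torch.einsum('bcm,bcn->bmn', fa, fb)  # cosine similarities, (B, H*W, H*W)
        corr = corr.softmax(dim=-1)                  # normalize over feat_b positions
        return corr.view(b, h * w, h, w)

    # Tiny smoke test with random features.
    corr = global_correlation(torch.randn(1, 64, 16, 16), torch.randn(1, 64, 16, 16))
    print(corr.shape)  # torch.Size([1, 256, 16, 16])

A regression head can then predict, from such a volume, mesh-vertex motions from global to local, giving each grid cell its own local homography; the depth-aware shape-preserved loss mentioned in the abstract then regularizes how those cells may deform.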
Pages: 4460-4472
Page count: 13
Related Papers (50 records in total; 10 shown below)
  • [1] Deep Image Registration With Depth-Aware Homography Estimation
    Huang, Chenwei
    Pan, Xiong
    Cheng, Jingchun
    Song, Jiajie
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 6 - 10
  • [2] Multi-Scale Correlation for Deep Homography Estimation
    Ke, Nan
    Shang, Zhaowei
    Zhao, Lingzhi
    Wang, Yingxin
    Zhou, Mingliang
    JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2022, 31 (08)
  • [3] Depth-aware pose estimation using deep learning for exoskeleton gait analysis
    Wang, Yachun
    Pei, Zhongcai
    Wang, Chen
    Tang, Zhiyong
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [4] MGHE-Net: A Transformer-Based Multi-Grid Homography Estimation Network for Image Stitching
    Tang, Yun
    Tian, Siyuan
    Shuai, Pengfei
    Duan, Yu
    IEEE ACCESS, 2024, 12 : 49216 - 49227
  • [5] Learning Depth-Aware Deep Representations for Robotic Perception
    Porzi, Lorenzo
    Bulo, Samuel Rota
    Penate-Sanchez, Adrian
    Ricci, Elisa
    Moreno-Noguer, Francesc
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2017, 2 (02): 468 - 475
  • [6] Reinforced Depth-Aware Deep Learning for Single Image Dehazing
    Guo, Tiantong
    Monga, Vishal
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020: 8891 - 8895
  • [7] DAL: A Deep Depth-Aware Long-term Tracker
    Qian, Yanlin
    Yan, Song
    Lukezic, Alan
    Kristan, Matej
    Kamarainen, Joni-Kristian
    Matas, Jiri
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021: 7825 - 7832
  • [8] Pedestrian Proposal Generation Using Depth-Aware Scale Estimation
    Park, Kihong
    Kim, Seungryong
    Sohn, Kwanghoon
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017: 2045 - 2049
  • [9] Deep Homography Estimation With Feature Correlation Transformer
    Zhou, Haoyu
    Hu, Wei
    Li, Ying
    He, Chu
    Chen, Xi
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023: 1397 - 1402