Deep Image Registration With Depth-Aware Homography Estimation

Cited by: 3
Authors
Huang, Chenwei [1 ]
Pan, Xiong [1 ]
Cheng, Jingchun [1 ]
Song, Jiajie [1 ]
Affiliations
[1] Beihang Univ, Inst Opt & Elect, Beijing, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
Image registration; Estimation; Cameras; Training; Signal processing algorithms; Optimization; Mathematical models; Homography estimation; image matching; depth-aware homography; pixel-wise image registration;
DOI
10.1109/LSP.2023.3238274
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809
Abstract
Image registration is a fundamental task in computer vision, with wide applications in image stitching, stereo vision, and motion estimation. Most current methods achieve image registration by estimating a global homography matrix between candidate images, either through point-feature-based matching or by direct prediction. However, because real-world 3D scenes have point-variant photographing distances (depth), a single global homography matrix is not sufficient to describe the pixel-wise relations between two images. Some researchers alleviate this problem by predicting multiple homography matrices for different patches or segmented regions; in this letter, we refine this idea further, i.e., we match images with pixel-wise, depth-aware homography estimation. First, we construct an efficient convolutional network, the DPH-Net, to predict the essential parameters causing image deviation, the camera rotation ($R$) and translation ($T$). Then, we feed in an image depth map to compute initial pixel-wise homography matrices, which are refined with an online optimization scheme. Finally, with the estimated pixel-specific homography parameters, pixel correspondences between candidate images can be readily computed for registration. Compared with state-of-the-art image registration algorithms, the proposed DPH-Net achieves the best performance, with 0.912 EPE and 0.977 SSIM, demonstrating the effectiveness of incorporating depth information and pixel-wise homography estimation into the image registration process.
Pages: 6-10
Number of pages: 5
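The abstract above outlines a pipeline of predicting the camera motion ($R$, $T$), combining it with a per-pixel depth map to obtain initial pixel-wise homographies, and then refining them online. As a minimal sketch of the geometric step only, the NumPy snippet below shows how per-pixel correspondences could be derived from a predicted rotation, translation, depth map, and known camera intrinsics $K$; the function name and interface are illustrative assumptions, not the authors' DPH-Net code, and the online optimization refinement is omitted.

```python
import numpy as np

def pixelwise_warp(depth, K, R, T):
    """Illustrative depth-aware pixel-wise correspondence computation.

    For a pixel p with depth d(p), the induced homography is
    H(p) = K (R + T n^T / d(p)) K^{-1}  (fronto-parallel plane, n = [0, 0, 1]^T),
    which is equivalent to back-projecting p to 3D with its depth, applying
    the rigid motion (R, T), and reprojecting with K.

    depth : (H, W) depth map of the source image
    K     : (3, 3) camera intrinsic matrix
    R, T  : (3, 3) rotation and (3,) translation between the two views
    Returns an (H, W, 2) map of corresponding pixel coordinates.
    """
    h, w = depth.shape
    # Homogeneous pixel grid of the source image.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)

    # Back-project each pixel to 3D using its depth.
    rays = np.linalg.inv(K) @ pix              # viewing rays
    pts3d = rays * depth.reshape(1, -1)        # scale rays by per-pixel depth

    # Apply the predicted rigid motion and reproject into the target view.
    pts3d_t = R @ pts3d + T.reshape(3, 1)
    proj = K @ pts3d_t
    proj = proj[:2] / proj[2:3]                # perspective division

    return proj.T.reshape(h, w, 2)
```

Because the depth enters the homography through the $T n^T / d(p)$ term, pixels at different depths receive different warps, which is the property a single global homography cannot capture.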