Infrared and visible image fusion algorithm based on progressive difference-aware attention

Cited by: 0
|
Authors
Li X. [1 ]
Feng Y. [1 ]
Zhang Y. [1 ]
Affiliations
[1] School of Energy and Electrical Engineering, Chang’an University, Xi’an
Keywords
cross-modality; deep learning; image fusion; visible-infrared fusion
DOI
10.1360/SST-2023-0148
Abstract
The fusion of infrared and visible images is an important research direction in image fusion: the two sources provide complementary information, so the fused image carries more content and supports better recognition and analysis. Existing approaches fall into two main categories, traditional methods and deep learning-based methods. Building on these, this paper proposes a progressive cross-modal difference-aware image fusion network and establishes an end-to-end visible-infrared image fusion model. The model adopts a CNN-based backbone consisting of a progressive feature extractor and an image reconstructor. First, the algorithm builds separate feature extraction branches for the visible and infrared images and inserts a difference-aware attention module (DAAM) between them. This module lets the network gradually integrate complementary information during feature extraction, so the extractor can fully capture both the common and the complementary features of the two modalities. The extracted deep features are then fused through an intermediate fusion strategy that combines the visible and infrared features, and the fused image is reconstructed by the image reconstructor. Finally, comparisons with related methods show that the proposed approach effectively improves fusion quality. © 2024 Chinese Academy of Sciences. All rights reserved.
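The abstract does not specify the internal operations of the DAAM, but the idea it describes — using the cross-modal feature difference to gate how much complementary information each branch absorbs from the other — can be sketched as follows. This is a minimal illustrative sketch in NumPy under assumed design choices (a channel-wise sigmoid gate over the pooled absolute difference), not the authors' actual module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def difference_aware_attention(feat_vis, feat_ir):
    """Hypothetical difference-aware attention step.

    feat_vis, feat_ir: feature maps of shape (batch, channels, H, W).
    The cross-modal difference highlights complementary information;
    a per-channel sigmoid gate derived from it controls how much of
    the other modality's features each branch injects into itself.
    """
    diff_ir_to_vis = feat_ir - feat_vis   # what IR carries that VIS lacks
    diff_vis_to_ir = feat_vis - feat_ir   # and vice versa

    # Channel-wise gates from the global average of the absolute difference.
    gate_vis = sigmoid(np.abs(diff_ir_to_vis).mean(axis=(-2, -1), keepdims=True))
    gate_ir = sigmoid(np.abs(diff_vis_to_ir).mean(axis=(-2, -1), keepdims=True))

    # Progressively inject gated complementary information into each branch.
    feat_vis_out = feat_vis + gate_vis * diff_ir_to_vis
    feat_ir_out = feat_ir + gate_ir * diff_vis_to_ir
    return feat_vis_out, feat_ir_out
```

In a learned version, the pooling-plus-sigmoid gate would typically be replaced by small convolutional layers trained end-to-end, and one such module would sit between the two branches at each stage of the progressive extractor.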
Pages: 1183-1197
Page count: 14