Multigrained Attention Network for Infrared and Visible Image Fusion

Cited by: 73
Authors
Li, Jing [1 ]
Huo, Hongtao [1 ]
Li, Chang [2 ]
Wang, Renhua [1 ]
Sui, Chenhong [3 ]
Liu, Zhao [4 ]
Affiliations
[1] Peoples Publ Secur Univ China, Dept Informat Technol & Cyber Secur, Beijing 100038, Peoples R China
[2] Hefei Univ Technol, Dept Biomed Engn, Hefei 230009, Peoples R China
[3] Yantai Univ, Sch Optoelect Informat Sci & Technol, Yantai 264000, Peoples R China
[4] Peoples Publ Secur Univ China, Grad Sch, Beijing 100038, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature loss; generative adversarial network (GAN); image fusion; multigrained attention mechanism; PERFORMANCE; TRANSFORM; MODEL;
D O I
10.1109/TIM.2020.3029360
Chinese Library Classification
TM [Electrical Technology]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Methods based on generative adversarial networks (GANs) have been widely used for infrared and visible image fusion. However, these methods cannot perceive the discriminative parts of an image. Therefore, we introduce a multigrained attention module into an encoder-decoder network to fuse infrared and visible images (MgAN-Fuse). Because of their different modalities, the infrared and visible images are encoded by two independent encoder networks, and the outputs of the two encoders are then concatenated and passed to the decoder to compute the fused result. To fully exploit the features of the multiscale layers and force the model to focus on the discriminative regions, we integrate attention modules into the multiscale layers of the encoder to obtain multigrained attention maps, which are then concatenated with the corresponding multiscale features of the decoder network. Thus, the proposed method can preserve the foreground target information of the infrared image and capture the context information of the visible image. Furthermore, we design an additional feature loss in the training process to preserve the important features of the visible image, and a dual adversarial architecture is employed to help the model capture sufficient infrared intensity information and visible details simultaneously. The ablation studies illustrate the validity of the multigrained attention network and the feature loss function. Extensive experiments on two infrared and visible image data sets demonstrate that the proposed MgAN-Fuse outperforms state-of-the-art methods.
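The abstract's core mechanism (per-scale attention maps gating encoder features before the two modality streams are concatenated for the decoder) can be sketched as follows. This is a minimal NumPy illustration, not the authors' network: the sigmoid-over-channel-mean attention, the number of scales, and the feature shapes are all assumptions for demonstration, standing in for the paper's learned attention modules.

```python
import numpy as np

def attention_map(feat):
    """Toy spatial attention: sigmoid of the channel-averaged activation.
    feat has shape (C, H, W); the returned gate has shape (1, H, W)."""
    a = feat.mean(axis=0, keepdims=True)
    return 1.0 / (1.0 + np.exp(-a))  # values in (0, 1)

def fuse(ir_feats, vis_feats):
    """Gate each scale's encoder features by its attention map, then
    concatenate the infrared and visible streams channel-wise, as the
    decoder would consume them. Inputs are lists of (C, H, W) arrays,
    one per scale."""
    fused = []
    for ir, vis in zip(ir_feats, vis_feats):
        fused.append(np.concatenate([attention_map(ir) * ir,
                                     attention_map(vis) * vis], axis=0))
    return fused

# Toy multiscale features: two scales, 4 channels each.
rng = np.random.default_rng(0)
ir  = [rng.standard_normal((4, 16, 16)), rng.standard_normal((4, 8, 8))]
vis = [rng.standard_normal((4, 16, 16)), rng.standard_normal((4, 8, 8))]
out = fuse(ir, vis)
print([f.shape for f in out])  # [(8, 16, 16), (8, 8, 8)]
```

In the paper the attention maps are produced by learned modules at each encoder scale and the concatenated result feeds the corresponding decoder layer; the sketch only shows the gating-and-concatenation data flow.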
Pages: 12
Related Papers
50 records
  • [31] Infrared and Visible Image Fusion via Decoupling Network
    Wang, Xue
    Guan, Zheng
    Yu, Shishuang
    Cao, Jinde
    Li, Ya
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [32] Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network
    Xu, Dongdong
    Wang, Yongcheng
    Xu, Shuyan
    Zhu, Kaiguang
    Zhang, Ning
    Zhang, Xin
    APPLIED SCIENCES-BASEL, 2020, 10 (02):
  • [33] Infrared and visible image fusion method based on hierarchical attention mechanism
    Li, Qinghua
    Yan, Bao
    Luo, Delin
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (02)
  • [34] Multiscale feature learning and attention mechanism for infrared and visible image fusion
    Li Gao
    DeLin Luo
    Song Wang
    Science China Technological Sciences, 2024, 67 : 408 - 422
  • [35] Unsupervised Infrared and Visible Image Fusion with Pixel Self-attention
    Cui, Saijia
    Zhou, Zhiqiang
    Li, Linhao
    Fei, Erfang
    PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021, : 437 - 441
  • [38] Infrared and visible light image fusion based on convolution and self attention
    Chen, Xiaoxuan
    Xu, Shuwen
    Hu, Shaohai
    Ma, Xiaole
    Xi Tong Gong Cheng Yu Dian Zi Ji Shu/Systems Engineering and Electronics, 2024, 46 (08): : 2641 - 2649
  • [39] DATFuse: Infrared and Visible Image Fusion via Dual Attention Transformer
    Tang, Wei
    He, Fazhi
    Liu, Yu
    Duan, Yansong
    Si, Tongzhen
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (07) : 3159 - 3172
  • [40] CAFNET: Cross-Attention Fusion Network for Infrared and Low Illumination Visible-Light Image
    Zhou, Xiaoling
    Jiang, Zetao
    Okuwobi, Idowu Paul
    NEURAL PROCESSING LETTERS, 2023, 55 (05) : 6027 - 6041