Multigrained Attention Network for Infrared and Visible Image Fusion

Cited by: 73
Authors
Li, Jing [1 ]
Huo, Hongtao [1 ]
Li, Chang [2 ]
Wang, Renhua [1 ]
Sui, Chenhong [3 ]
Liu, Zhao [4 ]
Affiliations
[1] Peoples Publ Secur Univ China, Dept Informat Technol & Cyber Secur, Beijing 100038, Peoples R China
[2] Hefei Univ Technol, Dept Biomed Engn, Hefei 230009, Peoples R China
[3] Yantai Univ, Sch Optoelect Informat Sci & Technol, Yantai 264000, Peoples R China
[4] Peoples Publ Secur Univ China, Grad Sch, Beijing 100038, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Feature loss; generative adversarial network (GAN); image fusion; multigrained attention mechanism; PERFORMANCE; TRANSFORM; MODEL;
DOI
10.1109/TIM.2020.3029360
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Methods based on generative adversarial networks (GANs) have been widely used for infrared and visible image fusion. However, these methods cannot perceive the discriminative parts of an image. Therefore, we introduce a multigrained attention module into an encoder-decoder network to fuse infrared and visible images (MgAN-Fuse). The infrared and visible images are encoded by two independent encoder networks because of their different modalities. The outputs of the two encoders are then concatenated, and the decoder computes the fused result. To fully exploit the features of the multiscale layers and force the model to focus on the discriminative regions, we integrate attention modules into the multiscale layers of the encoder to obtain multigrained attention maps, which are then concatenated with the corresponding multiscale features of the decoder network. Thus, the proposed method preserves the foreground target information of the infrared image and captures the context information of the visible image. Furthermore, we design an additional feature loss during training to preserve the important features of the visible image, and a dual adversarial architecture is employed to help the model capture sufficient infrared intensity information and visible details simultaneously. Ablation studies confirm the validity of the multigrained attention network and the feature loss function. Extensive experiments on two infrared and visible image data sets demonstrate that the proposed MgAN-Fuse outperforms state-of-the-art methods.
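The architecture described in the abstract can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch implementation of the forward path only: two independent encoders with an attention module at every scale, and a decoder that concatenates the attended multiscale features of both modalities at the matching scales. All names (SpatialAttention, Encoder, Decoder, MgANFuseSketch), channel widths, and the attention block design are assumptions based solely on the abstract, not the authors' implementation; the feature loss and the dual discriminators used during training are not shown.

```python
# Minimal sketch of an MgAN-Fuse-style generator, assuming PyTorch.
# Class names, channel widths, and the attention design are illustrative
# assumptions drawn only from the abstract, not the authors' code.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Stand-in for a multigrained attention module: predicts a per-pixel
    attention map and reweights the input features."""
    def __init__(self, channels):
        super().__init__()
        self.att = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.att(x)  # attention map in [0, 1] scales the features


class Encoder(nn.Module):
    """Modality-specific encoder; attention is applied at every scale so the
    decoder receives multigrained (per-scale) attention maps."""
    def __init__(self, in_ch=1, widths=(32, 64, 128)):
        super().__init__()
        chans = [in_ch] + list(widths)
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(len(widths))
        ])
        self.atts = nn.ModuleList([SpatialAttention(w) for w in widths])

    def forward(self, x):
        attended = []
        for block, att in zip(self.blocks, self.atts):
            x = block(x)
            attended.append(att(x))  # attended features at this scale
        return attended  # ordered shallow -> deep


class Decoder(nn.Module):
    """Upsamples the fused bottleneck and concatenates the attended
    multiscale features from both encoders at the corresponding scales."""
    def __init__(self, widths=(32, 64, 128), out_ch=1):
        super().__init__()
        w1, w2, w3 = widths
        self.up3 = nn.ConvTranspose2d(2 * w3, w2, 4, stride=2, padding=1)
        self.conv3 = nn.Sequential(nn.Conv2d(3 * w2, w2, 3, padding=1), nn.ReLU(inplace=True))
        self.up2 = nn.ConvTranspose2d(w2, w1, 4, stride=2, padding=1)
        self.conv2 = nn.Sequential(nn.Conv2d(3 * w1, w1, 3, padding=1), nn.ReLU(inplace=True))
        self.up1 = nn.ConvTranspose2d(w1, w1, 4, stride=2, padding=1)
        self.out = nn.Sequential(nn.Conv2d(w1, out_ch, 3, padding=1), nn.Tanh())

    def forward(self, ir_feats, vis_feats):
        # Fuse the deepest attended features of both modalities.
        x = torch.cat([ir_feats[2], vis_feats[2]], dim=1)
        x = self.conv3(torch.cat([self.up3(x), ir_feats[1], vis_feats[1]], dim=1))
        x = self.conv2(torch.cat([self.up2(x), ir_feats[0], vis_feats[0]], dim=1))
        return self.out(self.up1(x))


class MgANFuseSketch(nn.Module):
    """Two independent encoders (infrared / visible) feeding one decoder."""
    def __init__(self):
        super().__init__()
        self.enc_ir, self.enc_vis, self.dec = Encoder(), Encoder(), Decoder()

    def forward(self, ir, vis):
        return self.dec(self.enc_ir(ir), self.enc_vis(vis))


if __name__ == "__main__":
    ir = torch.rand(1, 1, 128, 128)
    vis = torch.rand(1, 1, 128, 128)
    print(MgANFuseSketch()(ir, vis).shape)  # torch.Size([1, 1, 128, 128])
```

During training, a dual adversarial setup would add one discriminator judging the fused image against infrared inputs and another against visible inputs, with the feature loss encouraging the fused result to preserve deep features of the visible image; those components are omitted from this forward-path sketch.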
Pages: 12
Related Papers
50 records in total
  • [1] TFIV: Multigrained Token Fusion for Infrared and Visible Image via Transformer
    Li, Jing
    Yang, Bin
    Bai, Lu
    Dou, Hao
    Li, Chang
    Ma, Lingfei
    [J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [2] Unsupervised densely attention network for infrared and visible image fusion
    Li, Yang
    Wang, Jixiao
    Miao, Zhuang
    Wang, Jiabao
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (45-46) : 34685 - 34696
  • [3] MAFusion: Multiscale Attention Network for Infrared and Visible Image Fusion
    Li, Xiaoling
    Chen, Houjin
    Li, Yanfeng
    Peng, Yahui
    [J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [4] Multiscale channel attention network for infrared and visible image fusion
    Zhu, Jiahui
    Dou, Qingyu
    Jian, Lihua
    Liu, Kai
    Hussain, Farhan
    Yang, Xiaomin
    [J]. CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2021, 33 (22):
  • [5] Self-Attention Progressive Network for Infrared and Visible Image Fusion
    Li, Shuying
    Han, Muyi
    Qin, Yuemei
    Li, Qiang
    [J]. REMOTE SENSING, 2024, 16 (18)
  • [6] Infrared and visible image fusion based on dilated residual attention network
    Mustafa, Hafiz Tayyab
    Yang, Jie
    Mustafa, Hamza
    Zareapoor, Masoumeh
    [J]. OPTIK, 2020, 224
  • [7] Attention based dual UNET network for infrared and visible image fusion
    Wang, Xuejiao
    Hua, Zhen
    Li, Jinjiang
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (25) : 66959 - 66980
  • [8] Infrared and Visible Image Fusion Using Detail Enhanced Channel Attention Network
    Cui, Yinghan
    Du, Huiqian
    Mei, Wenbo
    [J]. IEEE ACCESS, 2019, 7 : 182185 - 182197
  • [9] RAN: Infrared and Visible Image Fusion Network Based on Residual Attention Decomposition
    Yu, Jia
    Lu, Gehao
    Zhang, Jie
    [J]. ELECTRONICS, 2024, 13 (14)