CABnet: A channel attention dual adversarial balancing network for multimodal image fusion

Cited by: 3
Authors
Sun, Le [1 ]
Tang, Mengqi [1 ]
Muhammad, Ghulam [2 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Jiangsu Collaborat Innovat Ctr Atmospher Environm & Equipment Technol (CICAEET), Nanjing 210044, Jiangsu, Peoples R China
[2] King Saud Univ, Coll Comp & Informat Sci, Dept Comp Engn, Riyadh 11543, Saudi Arabia
Keywords
Image processing; Infrared and visible image fusion; Complementary information extraction; Generative adversarial networks; Adaptive factor; ENSEMBLE
DOI
10.1016/j.imavis.2024.105065
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Infrared and visible image fusion aims to generate informative images by leveraging the distinctive strengths of the infrared and visible modalities. These fused images play a crucial role in downstream tasks, including object detection, recognition, and segmentation. However, complementary information is often difficult to extract. Existing generative adversarial network-based methods generate fused images by modifying the distribution of the source images' features to preserve instances and texture details from both infrared and visible images. Nevertheless, these approaches may degrade the fused image quality when the original image quality is low. Balancing the information from the different modalities can improve the quality of the fused image. Hence, we introduce CABnet, a Channel Attention dual adversarial Balancing network. CABnet incorporates a channel attention mechanism to capture crucial channel features, thereby enhancing complementary information. It also employs an adaptive factor to control the mixing distribution of infrared and visible images, which ensures the preservation of instances and texture details during the adversarial process. To improve efficiency and reduce reliance on manual labeling, the training process adopts a semi-supervised learning strategy. In qualitative and quantitative experiments across multiple datasets, CABnet surpasses existing state-of-the-art methods in fusion performance, notably achieving a 51.3% improvement in signal-to-noise ratio and a 13.4% improvement in standard deviation.
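As a rough illustration of the mechanism the abstract describes (not the authors' released code), the following PyTorch sketch pairs a squeeze-and-excitation style channel attention block with a learnable per-channel adaptive factor that mixes infrared and visible feature maps. The module names, reduction ratio, and tensor sizes are assumptions made for the example only.

# Minimal sketch of channel attention plus adaptive-factor mixing of
# infrared and visible features, loosely following the abstract.
# All names and hyper-parameters are illustrative, not the authors' code.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global average pool
        self.fc = nn.Sequential(                   # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                               # re-weight channels


class AdaptiveFusion(nn.Module):
    """Mix infrared and visible features with a learnable adaptive factor."""

    def __init__(self, channels: int):
        super().__init__()
        self.att_ir = ChannelAttention(channels)
        self.att_vis = ChannelAttention(channels)
        # Per-channel mixing logits; sigmoid keeps the factor in (0, 1).
        self.alpha = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
        a = torch.sigmoid(self.alpha)
        return a * self.att_ir(feat_ir) + (1.0 - a) * self.att_vis(feat_vis)


if __name__ == "__main__":
    fuse = AdaptiveFusion(channels=64)
    ir = torch.randn(2, 64, 128, 128)    # infrared feature maps
    vis = torch.randn(2, 64, 128, 128)   # visible feature maps
    print(fuse(ir, vis).shape)           # torch.Size([2, 64, 128, 128])

In this sketch the adaptive factor is a single learnable parameter; in an adversarial setup it could instead be predicted from the input features so that the infrared/visible balance adapts per image during the dual-discriminator game.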
Pages: 11