SaReGAN: a salient regional generative adversarial network for visible and infrared image fusion

Cited: 1
Authors
Gao, Mingliang [1 ]
Zhou, Yi'nan [2 ]
Zhai, Wenzhe [1 ]
Zeng, Shuai [3 ]
Li, Qilei [4 ]
Affiliations
[1] Shandong Univ Technol, Coll Elect & Elect Engn, Zibo 255000, Shandong, Peoples R China
[2] Genesis AI Lab, Futong Technol, Chengdu 610054, Peoples R China
[3] Sichuan Univ, West China Univ Hosp 2, Dept Obstet & Gynaecol, Chengdu, Sichuan, Peoples R China
[4] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London E1 4NS, England
Keywords
Smart city; Image fusion; Visible and infrared image; Generative adversarial network; Salient region; Performance
DOI
10.1007/s11042-023-14393-2
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Multispectral image fusion plays a crucial role in smart-city environment safety. In the domain of visible and infrared image fusion, the vanishing of objects after fusion is a key problem that restricts fusion performance. To address this problem, a novel Salient Regional Generative Adversarial Network (SaReGAN) is presented for infrared and visible (VIS) image fusion. SaReGAN consists of three parts. In the first part, the salient regions of the infrared image are extracted via a visual saliency map, and the information in these regions is preserved. In the second part, the VIS image, the infrared image, and the salient information are merged thoroughly in the generator to produce a pre-fused image. In the third part, the discriminator attempts to distinguish the pre-fused image from the VIS image, so that the generator learns details from the VIS image through the adversarial mechanism. Experimental results verify that SaReGAN outperforms other state-of-the-art methods in both quantitative and qualitative evaluations.
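The saliency-guided fusion idea in the abstract can be sketched outside the GAN framework with a histogram-contrast saliency map and a weighted blend. This is a minimal illustration only: the function names, the contrast-based saliency formula, and the blending step are assumptions, not the authors' method; in SaReGAN the blend is replaced by a learned generator/discriminator pair.

```python
import numpy as np

def saliency_map(ir: np.ndarray) -> np.ndarray:
    """Histogram-contrast visual saliency: a pixel is salient when its
    intensity differs strongly from the rest of the image (one common
    way to build a visual saliency map; assumed here, not from the paper)."""
    hist = np.bincount(ir.ravel(), minlength=256).astype(float)
    levels = np.arange(256, dtype=float)
    # saliency of intensity v = sum over u of hist[u] * |v - u|
    sal_per_level = np.abs(levels[:, None] - levels[None, :]) @ hist
    sal = sal_per_level[ir]                     # look up saliency per pixel
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

def fuse(vis: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Blend the two images so salient infrared regions dominate.
    (The paper's generator/discriminator are replaced by a plain
    weighted blend purely for illustration.)"""
    w = saliency_map(ir)
    return np.rint(w * ir + (1.0 - w) * vis).astype(vis.dtype)
```

With a bright target on a dark infrared background, the fused image keeps the infrared target (high weight) and the visible background elsewhere, which is the "object vanishment" behavior the salient-region pathway is designed to prevent.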
Pages: 61659-61671
Page count: 13
Related Papers
50 records
  • [41] STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection
    Ma, Jiayi
    Tang, Linfeng
    Xu, Meilong
    Zhang, Hao
    Xiao, Guobao
    [J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2021, 70
  • [42] DSG-Fusion: Infrared and visible image fusion via generative adversarial networks and guided filter
    Yang, Xin
    Huo, Hongtao
    Li, Jing
    Li, Chang
    Liu, Zhao
    Chen, Xun
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2022, 200
  • [43] Double-channel cascade-based generative adversarial network for power equipment infrared and visible image fusion
    Wang, Jihong
    Yu, Haiyan
    [J]. EAI ENDORSED TRANSACTIONS ON SCALABLE INFORMATION SYSTEMS, 2022, 9 (36)
  • [44] Infrared and visible image fusion using a generative adversarial network with a dual-branch generator and matched dense blocks
    Guo, Li
    Tang, Dandan
    [J]. SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (05) : 1811 - 1819
  • [46] TransImg: A Translation Algorithm of Visible-to-Infrared Image Based on Generative Adversarial Network
    Han, Shuo
    Mo, Bo
    Xu, Junwei
    Sun, Shizun
    Zhao, Jie
    [J]. INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)
  • [47] Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance
    Li, Jing
    Huo, Hongtao
    Liu, Kejian
    Li, Chang
    [J]. INFORMATION SCIENCES, 2020, 529 : 28 - 41
  • [48] AttentionFGAN: Infrared and Visible Image Fusion Using Attention-Based Generative Adversarial Networks
    Li, Jing
    Huo, Hongtao
    Li, Chang
    Wang, Renhua
    Feng, Qi
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23 : 1383 - 1396
  • [49] A Novel Infrared and Visible Image Fusion Approach Based on Adversarial Neural Network
    Chen, Xianglong
    Wang, Haipeng
    Liang, Yaohui
    Meng, Ying
    Wang, Shifeng
    [J]. SENSORS, 2022, 22 (01)
  • [50] Multimodal Fusion Generative Adversarial Network for Image Synthesis
    Zhao, Liang
    Hu, Qinghao
    Li, Xiaoyuan
    Zhao, Jingyuan
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 1865 - 1869