Infrared and Visible Image Fusion Using a Deep Unsupervised Framework With Perceptual Loss

Cited by: 11
Authors
Xu, Dongdong [1 ]
Wang, Yongcheng [1 ]
Zhang, Xin [1 ,2 ]
Zhang, Ning [1 ,2 ]
Yu, Sibo [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Changchun Inst Opt Fine Mech & Phys, Changchun 130033, Peoples R China
[2] Univ Chinese Acad Sci, Coll Mat Sci & Optoelect Technol, Beijing 100049, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Feature extraction; Training; Convolution; Kernel; Data mining; Deep learning; Infrared and visible images; deep learning; unsupervised image fusion; densely connected convolutional network; perceptual loss; MULTISCALE-DECOMPOSITION; PERFORMANCE;
DOI
10.1109/ACCESS.2020.3037770
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812
Abstract
The fusion of infrared and visible images can exploit the indication characteristics and the textural details of the source images to realize all-weather detection. Deep learning (DL) based fusion solutions can reduce computational cost and complexity compared with traditional methods, since there is no need to design complex feature extraction methods and fusion rules. However, no standard reference images exist, and publicly available infrared and visible image pairs are scarce. Most supervised DL-based solutions must therefore be pre-trained on other large labeled datasets and may not perform well at test time, while the few unsupervised fusion methods can hardly obtain ideal images with a good visual impression. In this paper, an infrared and visible image fusion method based on an unsupervised convolutional neural network is proposed. In the network design, a densely connected convolutional network (DenseNet) is used as the sub-network for feature extraction and reconstruction, ensuring that more information from the source images is retained in the fused images. As for the loss function, a perceptual loss is introduced and combined with the structural similarity loss to constrain the updating of the weight parameters during back-propagation. The designed perceptual loss effectively improves the visual information fidelity (VIF) of the fused image. Experimental results show that this method obtains fused images with prominent targets and clear details. Compared with seven other traditional and deep learning methods, the fusion results of this method are better overall in both objective evaluation and visual observation.
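The combined objective described in the abstract (a structural-similarity term plus a perceptual term, each measured against both source images) can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions: the global single-window SSIM, the gradient-based stand-in for deep perceptual features, and the weights `alpha`/`beta` are placeholders, not the paper's actual windowed SSIM, pretrained feature network, or training values.

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window (global) SSIM; real implementations use local windows."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def grad_features(img):
    """Placeholder 'perceptual' features: horizontal/vertical gradients.
    The paper instead compares activations of a pretrained deep network."""
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences, cropped to a common shape
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences, cropped to a common shape
    return np.stack([gx, gy])

def fusion_loss(fused, ir, vis, alpha=0.5, beta=1.0):
    """SSIM dissimilarity against both sources plus a feature-space MSE term.
    alpha balances the infrared/visible sources; beta weights the perceptual
    term -- both are illustrative values, not the paper's."""
    l_ssim = alpha * (1 - ssim_global(fused, ir)) \
           + (1 - alpha) * (1 - ssim_global(fused, vis))
    l_perc = alpha * np.mean((grad_features(fused) - grad_features(ir)) ** 2) \
           + (1 - alpha) * np.mean((grad_features(fused) - grad_features(vis)) ** 2)
    return l_ssim + beta * l_perc
```

In training, a loss of this shape is evaluated on the network's fused output and back-propagated to update the DenseNet weights, which is what makes the scheme unsupervised: no ground-truth fused image is needed, only the two source images.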
Pages: 206445-206458
Page count: 14
Related Papers
50 items in total
  • [31] A deep learning and image enhancement based pipeline for infrared and visible image fusion
    Qi, Jin
    Eyob, Deboch
    Fanose, Mola Natnael
    Wang, Lingfeng
    Cheng, Jian
    NEUROCOMPUTING, 2024, 578
  • [32] INFRARED AND VISIBLE IMAGE FUSION USING BIMODAL TRANSFORMERS
    Park, Seonghyun
    Vien, An Gia
    Lee, Chul
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1741 - 1745
  • [33] Infrared and visible image fusion using NSCT and GGD
    Zhang, Xiuqiong
    Liu, Cuiyin
    Men, Tao
    Qin, Hongyin
    Wang, Mingrong
    THIRD INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2011), 2011, 8009
  • [34] Infrared and visible image fusion via octave Gaussian pyramid framework
    Yan, Lei
    Hao, Qun
    Cao, Jie
    Saad, Rizvi
    Li, Kun
    Yan, Zhengang
    Wu, Zhimin
    SCIENTIFIC REPORTS, 2021, 11 (01)
  • [35] RADFNet: An infrared and visible image fusion framework based on distributed network
    Feng, Siling
    Wu, Can
    Lin, Cong
    Huang, Mengxing
    FRONTIERS IN PLANT SCIENCE, 2023, 13
  • [37] EV-Fusion: A Novel Infrared and Low-Light Color Visible Image Fusion Network Integrating Unsupervised Visible Image Enhancement
    Zhang, Xin
    Wang, Xia
    Yan, Changda
    Sun, Qiyang
    IEEE SENSORS JOURNAL, 2024, 24 (04) : 4920 - 4934
  • [38] Visible and Infrared Image Fusion Framework based on RetinaNet for Marine Environment
    Farahnakian, Fahimeh
    Poikonen, Jussi
    Laurinen, Markus
    Makris, Dimitrios
    Heikkonen, Jukka
    2019 22ND INTERNATIONAL CONFERENCE ON INFORMATION FUSION (FUSION 2019), 2019,
  • [39] Maritime Infrared and Visible Image Fusion Based on Refined Features Fusion and Sobel Loss
    Gao, Zongjiang
    Zhu, Feixiang
    Chen, Haili
    Ma, Baoshan
    PHOTONICS, 2022, 9 (08)
  • [40] DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion
    Wang, Hongfeng
    Wang, Jianzhong
    Xu, Haonan
    Sun, Yong
    Yu, Zibo
    SENSORS, 2022, 22 (14)