Infrared and Visible Image Fusion Using a Deep Unsupervised Framework With Perceptual Loss

Cited by: 11
Authors
Xu, Dongdong [1 ]
Wang, Yongcheng [1 ]
Zhang, Xin [1 ,2 ]
Zhang, Ning [1 ,2 ]
Yu, Sibo [1 ,2 ]
Affiliations
[1] Chinese Acad Sci, Changchun Inst Opt Fine Mech & Phys, Changchun 130033, Peoples R China
[2] Univ Chinese Acad Sci, Coll Mat Sci & Optoelect Technol, Beijing 100049, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Feature extraction; Training; Convolution; Kernel; Data mining; Deep learning; Infrared and visible images; deep learning; unsupervised image fusion; densely connected convolutional network; perceptual loss; MULTISCALE-DECOMPOSITION; PERFORMANCE;
DOI
10.1109/ACCESS.2020.3037770
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The fusion of infrared and visible images can exploit the indication characteristics and the textural details of the source images to realize all-weather detection. Deep learning (DL) based fusion solutions can reduce computational cost and complexity compared with traditional methods, since there is no need to design complex feature extraction methods and fusion rules. However, there are no standard reference images, and publicly available infrared and visible image pairs are scarce. Most supervised DL-based solutions therefore have to be pre-trained on other large labeled datasets and may not perform well at test time, while the few existing unsupervised fusion methods can hardly produce ideal images with a good visual impression. In this paper, an infrared and visible image fusion method based on an unsupervised convolutional neural network is proposed. In designing the network structure, a densely connected convolutional network (DenseNet) is used as the sub-network for feature extraction and reconstruction, so that more information from the source images is retained in the fused images. As for the loss function, a perceptual loss is introduced and combined with the structural similarity loss to constrain the updating of the weight parameters during back-propagation. The designed perceptual loss effectively improves the visual information fidelity (VIF) of the fused image. Experimental results show that this method can obtain fused images with prominent targets and clear details. Compared with seven other traditional and deep learning methods, the fusion results of this method are better overall in terms of both objective evaluation and visual observation.
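The following is a minimal sketch (not the authors' released code) of how the combined objective described in the abstract can be written in PyTorch: a structural-similarity term between the fused image and each source, plus a perceptual term computed on fixed VGG-16 features. The layer cut-off (relu3_3), the trade-off weight lam, the simplified global (non-windowed) SSIM, and the assumption of 3-channel inputs in [0, 1] are illustrative choices, not details taken from the paper.

# Minimal sketch of the combined SSIM + perceptual loss described in the abstract.
# Assumptions (not from the paper): VGG-16 relu3_3 features, equal weighting of the
# two source images, a simplified global (non-windowed) SSIM, and 3-channel inputs
# (grayscale replicated) scaled to [0, 1].
import torch
import torch.nn.functional as F
from torchvision import models


class PerceptualLoss(torch.nn.Module):
    """MSE between fixed VGG-16 feature maps of the fused image and a source image."""

    def __init__(self, cut: int = 16):  # features[:16] ends at relu3_3 (assumed layer choice)
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.extractor = vgg[:cut].eval()
        for p in self.extractor.parameters():
            p.requires_grad = False  # VGG stays frozen; only the fusion network is trained

    def forward(self, fused: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        return F.mse_loss(self.extractor(fused), self.extractor(source))


def ssim_loss(x: torch.Tensor, y: torch.Tensor,
              c1: float = 0.01 ** 2, c2: float = 0.03 ** 2) -> torch.Tensor:
    """1 - SSIM computed globally over each image (simplified, no sliding window)."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim


def fusion_loss(fused, infrared, visible, perceptual: PerceptualLoss, lam: float = 0.1):
    """Unsupervised objective: SSIM terms against both sources plus a weighted perceptual term."""
    structural = ssim_loss(fused, infrared) + ssim_loss(fused, visible)
    perceptual_term = perceptual(fused, infrared) + perceptual(fused, visible)
    return structural + lam * perceptual_term  # lam is an assumed trade-off weight

In a training loop, the fused image would come from the fusion network itself (for example, fused = net(torch.cat([infrared, visible], dim=1))), and the returned scalar would be back-propagated through the fusion network only, since the VGG extractor is frozen.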
Pages: 206445 - 206458
Number of pages: 14
Related Papers
50 records in total
  • [1] Infrared and Visible Image Fusion using a Deep Learning Framework
    Li, Hui
    Wu, Xiao-Jun
    Kittler, Josef
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 2705 - 2710
  • [2] Unsupervised Infrared Image and Visible Image Fusion Algorithm Based on Deep Learning
    Chen Guoyang
    Wu Xiaojun
    Xu Tianyang
    LASER & OPTOELECTRONICS PROGRESS, 2022, 59 (04)
  • [3] VIF-Net: An Unsupervised Framework for Infrared and Visible Image Fusion
    Hou, Ruichao
    Zhou, Dongming
    Nie, Rencan
    Liu, Dong
    Xiong, Lei
    Guo, Yanbu
    Yu, Chuanbo
    IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING, 2020, 6 : 640 - 651
  • [4] Visible and Infrared Image Fusion Using Deep Learning
    Zhang, Xingchen
    Demiris, Yiannis
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (08) : 10535 - 10554
  • [5] StyleFuse: An unsupervised network based on style loss function for infrared and visible image fusion
    Cheng, Chen
    Sun, Cheng
    Sun, Yongqi
    Zhu, Jiahui
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2022, 106
  • [6] Infrared and visible image fusion using a guiding network to leverage perceptual similarity
    Kim, Jun-Hyung
    Hwang, Youngbae
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 227
  • [7] A Deep Learning Framework for Infrared and Visible Image Fusion Without Strict Registration
    Li, Huafeng
    Liu, Junyu
    Zhang, Yafei
    Liu, Yu
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (05) : 1625 - 1644
  • [8] Unsupervised densely attention network for infrared and visible image fusion
    Li, Yang
    Wang, Jixiao
    Miao, Zhuang
    Wang, Jiabao
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (45-46) : 34685 - 34696