An Efficient Network Model for Visible and Infrared Image Fusion

Cited by: 2
Authors
Pan, Zhu [1 ]
Ouyang, Wanqi
Affiliations
[1] Wuhan Univ Sci & Technol, Sch Machinery & Automat, Wuhan 430081, Peoples R China
Source
IEEE ACCESS | 2023, Vol. 11
Funding
National Natural Science Foundation of China
Keywords
Convolutional neural network; multi-feature extraction; optimized network; visible and infrared image fusion; FRAMEWORK; NEST;
DOI
10.1109/ACCESS.2023.3302702
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Visible and infrared image fusion (VIF) aims at remodeling an informative and panoramic image for subsequent image processing or human vision. Owing to its widespread application in military and civil fields, VIF technology has developed considerably in recent decades. However, the assignment of weights and the selection of fusion rules severely restrict the performance of most existing fusion algorithms. In response to this issue, an innovative and efficient VIF model based on a convolutional neural network (CNN) is proposed in this paper. First, multi-layer convolution kernels are applied to the two source images in a multi-scale manner to extract salient image features. Second, the extracted feature maps are concatenated along the channel dimension. Finally, the concatenated feature maps are reconstructed to produce the fused image. The main innovation of this paper is to adequately preserve meaningful details and adaptively integrate feature information driven by the source images within the CNN learning model. In addition, to adequately train the network, we generate a large-scale, high-resolution training dataset based on the COCO dataset. Experimental results indicate that, compared with existing fusion methods, the proposed method not only achieves consistently superior visual quality and objective metrics but also offers runtime advantages over other neural-network-based algorithms.
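As a rough illustration of the pipeline described in the abstract (multi-scale feature extraction, channel-wise concatenation, reconstruction), the sketch below assumes a PyTorch-style implementation; the class names, branch count, channel widths, and kernel sizes are hypothetical choices for illustration and are not taken from the paper.

```python
# Minimal sketch of the fusion pipeline outlined in the abstract.
# All layer sizes and names are illustrative assumptions, not the authors' configuration.
import torch
import torch.nn as nn

class MultiScaleEncoder(nn.Module):
    """Extracts salient features with convolution kernels of several sizes."""
    def __init__(self, in_ch=1, out_ch=16):
        super().__init__()
        # Parallel branches with different receptive fields (multi-scale manner).
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, out_ch, k, padding=k // 2), nn.ReLU(inplace=True))
            for k in (3, 5, 7)
        ])

    def forward(self, x):
        # Concatenate the multi-scale feature maps along the channel dimension.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

class FusionNet(nn.Module):
    """Encode both source images, concatenate their features, reconstruct the fused image."""
    def __init__(self):
        super().__init__()
        self.encoder = MultiScaleEncoder()
        # 2 source images x 3 branches x 16 channels = 96 input channels to the decoder.
        self.decoder = nn.Sequential(
            nn.Conv2d(96, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, visible, infrared):
        feats = torch.cat([self.encoder(visible), self.encoder(infrared)], dim=1)
        return self.decoder(feats)

if __name__ == "__main__":
    vis = torch.rand(1, 1, 256, 256)   # grayscale visible image
    ir = torch.rand(1, 1, 256, 256)    # infrared image
    fused = FusionNet()(vis, ir)
    print(fused.shape)                 # torch.Size([1, 1, 256, 256])
```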
Pages: 86413-86430
Page count: 18
Related Papers (items 31-40 of 50)
[31] Gao, Hongwei; Wang, Yutong; Sun, Jian; Jiang, Yueqiu; Gai, Yonggang; Yu, Jiahui. Efficient multi-level cross-modal fusion and detection network for infrared and visible image. ALEXANDRIA ENGINEERING JOURNAL, 2024, 108: 306-318.
[32] Ma, Yong; Chen, Jun; Chen, Chen; Fan, Fan; Ma, Jiayi. Infrared and visible image fusion using total variation model. NEUROCOMPUTING, 2016, 202: 12-19.
[33] Xia, Zhengwei; Liu, Yun; Wang, Xiaoyun; Zhang, Feiyun; Chen, Rui; Jiang, Weiwei. Infrared and Visible Image Fusion via Hybrid Variational Model. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2024, E107D (04): 569-573.
[34] Shen, Yu; Chen, Xiao-Peng; Liu, Cheng; Zhang, Hong-Guo; Wang, Lin. Infrared and visible image fusion based on hybrid model driving. Kongzhi yu Juece/Control and Decision, 2021, 36 (09): 2143-2151.
[35] Jiang, Yichun; Liu, Yunqing; Zhan, Weida; Zhu, Depeng. Infrared and Visible Image Fusion Method Based on Degradation Model. JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2022, 44 (12): 4405-4415.
[36] Feng, Xin; Li, Chuan; Hu, Kai-Qun. Infrared and visible image fusion based on deep Boltzmann model. ACTA PHYSICA SINICA, 2014, 63 (18).
[37] Huang, Shuying; Song, Zixiang; Yang, Yong; Wan, Weiguo; Kong, Xiangkai. MAGAN: Multiattention Generative Adversarial Network for Infrared and Visible Image Fusion. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72.
[38] Feng, Yufang; Lu, Houqing; Bai, Jingbo; Cao, Lin; Yin, Hong. Fully convolutional network-based infrared and visible image fusion. MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (21-22): 15001-15014.
[39] Feng, Siling; Wu, Can; Lin, Cong; Huang, Mengxing. RADFNet: An infrared and visible image fusion framework based on distributed network. FRONTIERS IN PLANT SCIENCE, 2023, 13.
[40] Li, Shuying; Han, Muyi; Qin, Yuemei; Li, Qiang. Self-Attention Progressive Network for Infrared and Visible Image Fusion. REMOTE SENSING, 2024, 16 (18).