LIALFP: Multi-band images synchronous fusion model based on latent information association and local feature preserving

Cited by: 4
Authors
Wang, Bin [1 ,4 ]
Zhao, Qian [2 ]
Bai, Guifeng [1 ]
Zeng, Jianchao [1 ]
Xie, Shiyun [3 ]
Wen, Leihua [4 ]
Affiliations
[1] North Univ China, Dept Data Sci & Technol, Taiyuan 030051, Peoples R China
[2] Shanxi Engn Vocat Coll, Coal Training Ctr, Taiyuan 030051, Peoples R China
[3] Beijing Univ Posts & Telecommun, Int Coll, Beijing 102209, Peoples R China
[4] Zhongke Ruixin Beijing Sci Tech Co Ltd, Beijing 100089, Peoples R China
Keywords
Image Fusion; Multi-band Images; Representation Learning; Laplacian Matrix; GENERATIVE ADVERSARIAL NETWORK; FOCUS IMAGE; NEURAL-NETWORK; NEST;
DOI
10.1016/j.infrared.2021.103975
Chinese Library Classification
TH7 [Instruments and Meters]
Subject Classification Codes
0804; 080401; 081102
Abstract
The fusion of multi-band images, i.e., far-infrared (FIRI), near-infrared (NIRI) and visible (VISI) images, faces three main challenges. The first is whether the fusion process can be performed synchronously: most existing algorithms are designed for two inputs, forcing them to merge multiple images sequentially, which can leave their results vulnerable to ambiguity or artifacts. The second is that in some image fusion fields (e.g., multi-band images) no ground truth for the fused result can be obtained at all, which immediately prevents supervised methods from exploiting their full advantages. The third is that the latent projection between the result and the source images is often not considered directly; notably, this relation ties the fusion result not only to all inputs jointly but also to each individual source. To address these problems, this paper establishes an unsupervised representation learning model for synchronous multi-band image fusion. First, salient pixel fusion features are extracted to ensure that the primary information is integrated. Second, the latent relationship between the result and the full set of sources is assumed to be a linear mapping, which reduces the unpredictability of the fusion results. In addition, the transformation matrices are given a feature-selection role, allowing them to choose discriminative features and project them into the fusion space. Then, the locally significant features of each source are captured by a designed graph Laplacian matrix. Finally, experiments comparing the proposed algorithm with a variety of recent advanced algorithms, on both subjective judgment and objective indicators, demonstrate its rationality and superiority.
Pages: 12
Related Papers
50 records in total
  • [21] SAR Image Fusion Classification Based on the Decision-Level Combination of Multi-Band Information
    Zhu, Jinbiao
    Pan, Jie
    Jiang, Wen
    Yue, Xijuan
    Yin, Pengyu
    REMOTE SENSING, 2022, 14 (09)
  • [22] Research on the measurement of CO2 concentration based on multi-band fusion model
    Honglian Li
    Shuai Di
    Wenjing Lv
    Yaqing Jia
    Shijie Fu
    Lide Fang
    Applied Physics B, 2021, 127
  • [23] Research on the measurement of CO2 concentration based on multi-band fusion model
    Li, Honglian
    Di, Shuai
    Lv, Wenjing
    Jia, Yaqing
    Fu, Shijie
    Fang, Lide
    APPLIED PHYSICS B-LASERS AND OPTICS, 2021, 127 (01):
  • [24] Link Prediction based on Deep Latent Feature Model by Fusion of Network Hierarchy Information
    Cai, Fei
    Chen, Jie
    Zhang, Xin
    Mou, Xiaohui
    Zhu, Rongrong
    TEHNICKI VJESNIK-TECHNICAL GAZETTE, 2020, 27 (03): 912 - 922
  • [25] Multi-Band Image Synchronous Super-Resolution and Fusion Method Based on Improved WGAN-GP
    Tian S.
    Lin S.
    Lei H.
    Li D.
    Wang L.
    Guangxue Xuebao/Acta Optica Sinica, 2020, 40 (20):
  • [26] Research on fusion schemes of multi-band color night vision images based on opponent vision property
    Dept. of Optical Engineering, Beijing Institute of Technology, Beijing 100081, China
    Hongwai Yu Haomibo Xuebao, 2006, 25 (06): 455 - 459
  • [27] Research on fusion schemes of multi-band color night vision images based on opponent vision property
    Wang Ling-Xue
    Jin Wei-Qi
    Shi Jun-Sheng
    Wang Sheng-Xiang
    Wang Xia
    JOURNAL OF INFRARED AND MILLIMETER WAVES, 2006, 25 (06) : 455 - 459
  • [28] Multi-Band Image Synchronous Super-Resolution and Fusion Method Based on Improved WGAN-GP
    Tian Songwang
    Lin Suzhen
    Lei Haiwei
    Li Dawei
    Wang Lifang
    ACTA OPTICA SINICA, 2020, 40 (20)
  • [29] Face Model Fitting based on Machine Learning from Multi-band Images of Facial Components
    Wimmer, Matthias
    Mayer, Christoph
    Stulp, Freek
    Radig, Bernd
    2008 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, VOLS 1-3, 2008, : 1007 - +
  • [30] Passive tracking based on data association with information fusion of multi-feature and multi-target
    Wang, JG
    Luo, JQ
    Lv, JM
    PROCEEDINGS OF 2003 INTERNATIONAL CONFERENCE ON NEURAL NETWORKS & SIGNAL PROCESSING, PROCEEDINGS, VOLS 1 AND 2, 2003, : 686 - 689