Semantic perceptive infrared and visible image fusion Transformer

Cited by: 4
Authors
Yang, Xin [1 ,2 ]
Huo, Hongtao [1 ]
Li, Chang [3 ]
Liu, Xiaowen [1 ]
Wang, Wenxi [1 ]
Wang, Cheng [1 ]
Affiliations
[1] Peoples Publ Secur Univ China, Sch Informat Technol & Cyber Secur, Beijing 100038, Peoples R China
[2] Yunnan Police Coll, Informat Secur Coll, Kunming 650221, Peoples R China
[3] Hefei Univ Technol, Dept Biomed Engn, Hefei 230009, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Infrared image; Visible image; Transformer; Long-range dependency; Local feature; Semantic perceptive; Image fusion; GENERATIVE ADVERSARIAL NETWORK;
DOI
10.1016/j.patcog.2023.110223
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning based fusion mechanisms have achieved sophisticated performance in the field of image fusion. However, most existing approaches focus on learning global and local features but seldom consider modeling semantic information, which can result in inadequate preservation of source information. In this work, we propose a semantic perceptive infrared and visible image fusion Transformer (SePT). The proposed SePT extracts local features through a convolutional neural network (CNN) based module and learns long-range dependencies through Transformer based modules, while two semantic modeling modules built on the Transformer architecture manage high-level semantic information. One semantic modeling module maps the shallow features of the source images into deep semantic features; the other learns deep semantic information over different receptive fields. The final fused results are recovered from the combination of local features, long-range dependencies and semantic features. Extensive comparison experiments demonstrate the superiority of SePT compared with other advanced fusion approaches.
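As a rough illustration of the hybrid design the abstract describes (a CNN branch for local features plus Transformer-style attention for long-range dependency, combined into a fused result), the following is a minimal NumPy sketch. All function names, the single-head attention, and the averaging fusion rule are illustrative assumptions for a toy single-channel case, not the authors' SePT implementation.

```python
import numpy as np

def conv3x3_mean(img):
    # Local-feature branch: a 3x3 box filter as a stand-in
    # for a learned CNN layer (edge padding keeps the shape).
    H, W = img.shape
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = pad[i:i + 3, j:j + 3].mean()
    return out

def self_attention(tokens):
    # Long-range branch: single-head scaled dot-product
    # self-attention over all tokens (every pixel attends
    # to every other pixel, regardless of distance).
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numeric stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ tokens

def fuse(ir, vis):
    # Combine local and attention-refined global features per
    # modality, then average the two modalities (toy fusion rule;
    # SePT learns this combination and adds semantic modules).
    feats = []
    for img in (ir, vis):
        local = conv3x3_mean(img)
        tokens = img.reshape(-1, 1)  # each pixel as a 1-d token
        global_ = self_attention(tokens).reshape(img.shape)
        feats.append(0.5 * local + 0.5 * global_)
    return 0.5 * (feats[0] + feats[1])
```

A real implementation would replace the box filter with stacked learned convolutions, use multi-head attention over patch embeddings rather than raw pixels, and add the two semantic modeling modules the abstract describes; this sketch only shows how local and long-range responses can be merged.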
Pages: 15
Related papers
50 records
  • [41] SBIT-Fuse: Infrared and visible image fusion based on Symmetrical Bilateral interaction and Transformer
    Li, Bicao
    Lu, Jiaxi
    Liu, Zhoufeng
    Shao, Zhuhong
    Li, Chunlei
    Liu, Xilin
    Zhang, Jie
    Zhu, Xiya
    [J]. INFRARED PHYSICS & TECHNOLOGY, 2024, 138
  • [42] SDTFusion: A split-head dense transformer based network for infrared and visible image fusion
    Pang, Shan
    Huo, Hongtao
    Liu, Xiaowen
    Zheng, Bowen
    Li, Jing
    [J]. INFRARED PHYSICS & TECHNOLOGY, 2024, 138
  • [43] Infrared and Visible Image Fusion Algorithm Based on Improved Residual Swin Transformer and Sobel Operators
    Luo, Yongyu
    Luo, Zhongqiang
    [J]. IEEE ACCESS, 2024, 12 : 82134 - 82145
  • [44] Infrared and Visible Image Fusion with Hybrid Image Filtering
    Zhang, Yongxin
    Li, Deguang
    Zhu, WenPeng
    [J]. MATHEMATICAL PROBLEMS IN ENGINEERING, 2020, 2020
  • [45] SDFuse: Semantic-injected dual-flow learning for infrared and visible image fusion
    Wang, Enlong
    Li, Jiawei
    Lei, Jia
    Liu, Jinyuan
    Zhou, Shihua
    Wang, Bin
    Kasabov, Nikola K.
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 252
  • [46] DBSD: DUAL branches network using semantic and detail information for infrared and visible image fusion
    Wang, Xuejiao
    Hua, Zhen
    Li, Jinjiang
    [J]. INFRARED PHYSICS & TECHNOLOGY, 2023, 133
  • [47] A Review on Infrared and Visible Image Fusion Techniques
    Patel, Ami
    Chaudhary, Jayesh
    [J]. INTELLIGENT COMMUNICATION TECHNOLOGIES AND VIRTUAL MOBILE NETWORKS, ICICV 2019, 2020, 33 : 127 - 144
  • [48] Infrared and visible image fusion for face recognition
    Singh, S
    Gyaourova, A
    Bebis, G
    Pavlidis, I
    [J]. BIOMETRIC TECHNOLOGY FOR HUMAN IDENTIFICATION, 2004, 5404 : 585 - 596
  • [49] TCPMFNet: An infrared and visible image fusion network with composite auto encoder and transformer–convolutional parallel mixed fusion strategy
    Yi, Shi
    Jiang, Gang
    Liu, Xi
    Li, Junjie
    Chen, Ling
    [J]. INFRARED PHYSICS & TECHNOLOGY, 2022, 127
  • [50] Reflectance estimation for infrared and visible image fusion
    Gu, Yan
    Yang, Feng
    Zhao, Weijun
    Guo, Yiliang
    Min, Chaobo
    [J]. KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2021, 15 (08): : 2749 - 2763