Advancing infrared and visible image fusion with an enhanced multiscale encoder and attention-based networks

Cited: 0
Authors
Wang, Jiashuo [1 ,3 ,4 ]
Chen, Yong [2 ]
Sun, Xiaoyun [2 ,4 ]
Xing, Hui [2 ]
Zhang, Fan [1 ]
Song, Shiji [2 ]
Yu, Shuyong [3 ]
Affiliations
[1] Shijiazhuang Tiedao Univ, Sch Mech Engn, Shijiazhuang 050043, Hebei, Peoples R China
[2] Shijiazhuang Tiedao Univ, Sch Elect & Elect Engn, Shijiazhuang 050043, Hebei, Peoples R China
[3] Beijing Railway Signal Co Ltd, Beijing 102613, Peoples R China
[4] Shijiazhuang Tiedao Univ, Hebei Prov Collaborat Innovat Ctr Transportat Powe, Shijiazhuang 050043, Hebei, Peoples R China
Keywords
NEST;
DOI
10.1016/j.isci.2024.110915
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Infrared and visible image fusion aims to produce images that highlight key targets and offer distinct textures by merging the thermal-radiation information of infrared images with the detailed textures of visible images. Traditional autoencoder-decoder-based fusion methods often rely on manually designed fusion strategies, which lack flexibility across different scenarios. To address this limitation, we introduce EMAFusion, a fusion approach featuring an enhanced multiscale encoder and a learnable, lightweight fusion network. Our method incorporates skip connections, the convolutional block attention module (CBAM), and a nest architecture within the autoencoder-decoder framework to adeptly extract and preserve multiscale features for fusion tasks. Furthermore, we propose a fusion network driven by spatial and channel attention mechanisms, designed to precisely capture and integrate essential features from both image types. Comprehensive evaluations on the TNO image fusion dataset affirm the proposed method's superiority over existing state-of-the-art techniques, demonstrating its potential for advancing infrared and visible image fusion.
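The two mechanisms named in the abstract, CBAM-style feature refinement (channel then spatial attention) and an attention-driven fusion of infrared and visible feature maps, can be illustrated with a minimal NumPy sketch. The random weights, the reduction ratio (r = 4), the fixed pooling in place of CBAM's learned 7x7 convolution, and the per-pixel softmax fusion rule are all illustrative assumptions, not the paper's actual learned modules:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM channel attention: a shared two-layer MLP scores avg- and
    max-pooled channel descriptors; feat has shape (C, H, W)."""
    avg = feat.mean(axis=(1, 2))                       # (C,)
    mx = feat.max(axis=(1, 2))                         # (C,)
    att = sigmoid(w2 @ np.maximum(0.0, w1 @ avg)
                  + w2 @ np.maximum(0.0, w1 @ mx))     # (C,)
    return feat * att[:, None, None]

def spatial_attention(feat):
    """CBAM spatial attention: pool along channels, then score each pixel.
    A fixed average stands in for the learned 7x7 conv in this sketch."""
    avg = feat.mean(axis=0)                            # (H, W)
    mx = feat.max(axis=0)                              # (H, W)
    att = sigmoid((avg + mx) / 2.0)                    # (H, W)
    return feat * att[None, :, :]

def cbam(feat, w1, w2):
    return spatial_attention(channel_attention(feat, w1, w2))

def attention_fuse(f_ir, f_vis):
    """Per-pixel softmax over each modality's mean channel activity,
    a hand-rolled stand-in for the paper's learnable fusion network."""
    scores = np.stack([f_ir.mean(axis=0), f_vis.mean(axis=0)])  # (2, H, W)
    e = np.exp(scores - scores.max(axis=0))            # stabilized softmax
    w = e / e.sum(axis=0)                              # weights sum to 1
    return w[0][None] * f_ir + w[1][None] * f_vis

rng = np.random.default_rng(0)
C, H, W = 8, 16, 16
w1 = rng.standard_normal((C // 4, C))                  # reduction ratio r = 4
w2 = rng.standard_normal((C, C // 4))
f_ir = cbam(rng.standard_normal((C, H, W)), w1, w2)    # refined infrared features
f_vis = cbam(rng.standard_normal((C, H, W)), w1, w2)   # refined visible features
fused = attention_fuse(f_ir, f_vis)
print(fused.shape)  # (8, 16, 16)
```

In the actual method the encoder supplies multiscale features at several resolutions and the fusion weights are learned end to end; this sketch only shows the attention-then-fuse data flow on a single scale.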
Pages: 20