Multi-level receptive field feature reuse for multi-focus image fusion

Times Cited: 0
Authors
Jiang, Limai [1 ,2 ,3 ]
Fan, Hui [4 ]
Li, Jinjiang [3 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
[2] Univ Chinese Acad Sci, Shenzhen Coll Adv Technol, Shenzhen, Peoples R China
[3] Shandong Technol & Business Univ, Sch Comp Sci & Technol, Yantai, Peoples R China
[4] Coinnovat Ctr Shandong Coll & Univ Future Intelli, Yantai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-focus image fusion; Deep learning; Regression model; Feature reuse; Generative adversarial network; Convolutional neural network; GAN;
DOI
10.1007/s00138-022-01345-3
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Multi-focus image fusion, the fusion of two or more images focused on different targets into one all-in-focus image, is a worthwhile problem in digital image processing. Traditional methods usually operate in the frequency or spatial domain, but they can guarantee neither an accurate activity-level measurement for every image detail nor an optimal choice of fusion rules. Deep learning, with its strong feature representation ability, has therefore become the mainstream approach to multi-focus image fusion. However, most deep learning frameworks to date have not balanced the relationship among the features of the two inputs, the shallow features, and the feature fusion. To remedy these shortcomings of previous work, we propose an end-to-end deep network consisting of an encoder and a decoder. The encoder is a pseudo-Siamese network: its two branches extract the common and distinct feature sets of the two inputs, reuse the shallow features, and finally form the coding. The decoder analyzes this coding and reduces its dimensionality to generate a high-quality fused image. Extensive experiments show that our network structure is effective: compared with recent deep-learning-based and traditional multi-focus image fusion methods, our method performs slightly better in both objective metrics and subjective visual comparison.
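To make the pipeline the abstract describes concrete, below is a minimal PyTorch sketch of an encoder-decoder of this kind: a pseudo-Siamese encoder whose two branches extract multi-level features, shallow-feature reuse by concatenating early maps into the coding, and a decoder that reduces dimensionality to produce the fused image. This is our own illustration under stated assumptions, not the authors' implementation: all module names, layer widths, and the use of dilated convolutions to obtain the multi-level receptive fields are hypothetical choices.

```python
import torch
import torch.nn as nn


class EncoderBranch(nn.Module):
    """One branch of a pseudo-Siamese encoder (same structure per branch,
    weights NOT shared). Dilated convs enlarge the receptive field per level."""

    def __init__(self, in_ch=1, base_ch=32):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, base_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(base_ch, base_ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True))
        self.conv3 = nn.Sequential(nn.Conv2d(base_ch, base_ch, 3, padding=4, dilation=4), nn.ReLU(inplace=True))

    def forward(self, x):
        f1 = self.conv1(x)   # shallow features, reused later in the coding
        f2 = self.conv2(f1)  # medium receptive field
        f3 = self.conv3(f2)  # large receptive field
        return f1, f2, f3


class FusionNet(nn.Module):
    """End-to-end sketch: two encoder branches -> concatenated coding -> decoder."""

    def __init__(self, in_ch=1, base_ch=32):
        super().__init__()
        self.branch_a = EncoderBranch(in_ch, base_ch)  # pseudo-Siamese:
        self.branch_b = EncoderBranch(in_ch, base_ch)  # independent weights
        # decoder progressively reduces channels back to a single image
        self.decoder = nn.Sequential(
            nn.Conv2d(base_ch * 6, base_ch * 2, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch * 2, base_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base_ch, in_ch, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, img_a, img_b):
        a1, a2, a3 = self.branch_a(img_a)
        b1, b2, b3 = self.branch_b(img_b)
        # the "coding": multi-level features of both inputs, with the shallow
        # maps a1/b1 reused directly alongside the deeper ones
        code = torch.cat([a1, a2, a3, b1, b2, b3], dim=1)
        return self.decoder(code)


if __name__ == "__main__":
    net = FusionNet()
    a = torch.rand(1, 1, 128, 128)  # near-focus input
    b = torch.rand(1, 1, 128, 128)  # far-focus input
    print(net(a, b).shape)          # torch.Size([1, 1, 128, 128])
```

In this sketch the decoder sees the shallow maps a1 and b1 directly inside the concatenated coding, which captures the spirit of feature reuse: fine spatial detail from early layers is not lost behind the deeper, larger-receptive-field features.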
Pages: 18
Related Papers
50 records in total
  • [1] MLDNet: Multi-level dense network for multi-focus image fusion
    Mustafa, Hafiz Tayyab
    Zareapoor, Masoumeh
    Yang, Jie
    [J]. Signal Processing: Image Communication, 2020, 85
  • [2] A measure for the evaluation of multi-focus image fusion at feature level
    Feng, Yuncong
    Guo, Rui
    Shen, Xuanjing
    Zhang, Xiaoli
    [J]. Multimedia Tools and Applications, 2022, 81(13): 18053-18071
  • [3] Improved Multi-Focus Image Fusion
    Jameel, Amina
    Noor, Fouzia
    [C]. 2015 18th International Conference on Information Fusion (FUSION), 2015: 1346-1352
  • [4] A Multi-focus Image Fusion Classifier
    Siddiqui, Abdul Basit
    Rashid, Muhammad
    Jaffar, M. Arfan
    Hussain, Ayyaz
    Mirza, Anwar M.
    [J]. Information: An International Interdisciplinary Journal, 2012, 15(04): 1757-1764
  • [5] Image registration for multi-focus image fusion
    Zhang, Z.
    Blum, R. S.
    [C]. Battlespace Digitization and Network-Centric Warfare, 2001, 4396: 279-290
  • [6] Multi-focus thermal image fusion
    Benes, Radek
    Dvorak, Pavel
    Faundez-Zanuy, Marcos
    Espinosa-Duro, Virginia
    Mekyska, Jiri
    [J]. Pattern Recognition Letters, 2013, 34(05): 536-544
  • [7] Multi-image transformer for multi-focus image fusion
    Karacan, Levent
    [J]. Signal Processing: Image Communication, 2023, 119
  • [8] Multi-focus image fusion: Transformer and shallow feature attention matters
    Wu, Pan
    Jiang, Limai
    Hua, Zhen
    Li, Jinjiang
    [J]. Displays, 2023, 76