A multi-channel neural network model for multi-focus image fusion

Cited by: 1
Authors
Qi, Yunliang [1 ,2 ]
Yang, Zhen [2 ]
Lu, Xiangyu [2 ]
Li, Shouliang [2 ]
Ma, Yide [2 ]
Affiliations
[1] Zhejiang Lab, Hangzhou 311100, Zhejiang, Peoples R China
[2] Lanzhou Univ, Sch Informat Sci Engn, Lanzhou 730000, Gansu, Peoples R China
Keywords
Multi-focus image fusion; Visual cortex neural network; Multi-channel; Decision map; Image fusion; INTERSECTING CORTICAL MODEL; QUALITY ASSESSMENT; INFORMATION; ENHANCEMENT; PERFORMANCE; TRANSFORM; ALGORITHM; FRAMEWORK; FIELD; PCNN;
DOI
10.1016/j.eswa.2024.123244
CLC classification
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The objective of multi-focus image fusion (MFIF) is to generate a fully focused image by integrating multiple partially focused source images. Most existing methods do not fully consider the local gradient variation rate of the source images, which makes it difficult to accurately distinguish a small defocused (focused) region covered by a large focused (defocused) region. In addition, these methods cause edge blurring because they do not take into account misregistration of the source images. To address these issues, in this paper we propose a simple and effective multi-focus image fusion framework based on a multi-channel Rybak neural network (MCRYNN) model. Specifically, the proposed MCRYNN model is highly sensitive to local gradient changes in images owing to its input receptive fields, and it can process multiple source images in parallel to extract the features of focused regions. Moreover, the proposed method can accurately generate decision maps for the multi-focus image fusion task by exploiting the information-interaction effect of its parallel network structure. Finally, we conduct qualitative and quantitative experiments on public datasets, and the results show that the proposed method outperforms state-of-the-art methods.
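The Rybak-network dynamics are specific to the paper and not reproduced here; as a generic illustration of the decision-map pipeline the abstract describes (gradient-based focus measure per source image, a per-pixel decision map, then pixel selection), the following is a minimal NumPy sketch. All function names are hypothetical, and the simple gradient-energy focus measure stands in for the MCRYNN feature extraction.

```python
import numpy as np

def local_gradient_energy(img, k=3):
    """Focus measure: squared finite differences, box-averaged over a k x k window."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = np.diff(img, axis=1)   # horizontal gradient
    gy[:-1, :] = np.diff(img, axis=0)   # vertical gradient
    energy = gx**2 + gy**2
    pad = k // 2
    padded = np.pad(energy, pad, mode="edge")
    # k x k box filter by summing shifted copies
    out = np.zeros_like(energy)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + energy.shape[0], dx:dx + energy.shape[1]]
    return out

def fuse(img_a, img_b, k=3):
    """Build a per-pixel decision map from the focus measures and select pixels."""
    fa = local_gradient_energy(img_a, k)
    fb = local_gradient_energy(img_b, k)
    decision = fa >= fb                  # True -> take the pixel from img_a
    fused = np.where(decision, img_a, img_b)
    return fused, decision
```

A real MFIF method would additionally regularize the decision map (e.g. morphological cleanup) to suppress the isolated misclassifications and edge blurring the abstract discusses; this sketch only shows the basic measure-decide-select structure.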
Pages: 19