Multimodal medical image fusion based on interval gradients and convolutional neural networks

Cited by: 0
Authors
Gu, Xiaolong [1 ,2 ]
Xia, Ying [1 ]
Zhang, Jie [2 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing, Peoples R China
[2] Chongqing Technol & Business Univ, Natl Res Base Intelligent Mfg Serv, Chongqing, Peoples R China
Source
BMC MEDICAL IMAGING | 2024, Vol. 24, No. 1
Funding
National Natural Science Foundation of China
Keywords
Physiological information; Metabolic information; Interval gradient; Convolutional neural network; Perception image; Complex wavelet transform
DOI
10.1186/s12880-024-01418-x
CLC codes
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline codes
1002; 100207; 1009
Abstract
Many image fusion methods have been proposed to leverage the advantages of functional and anatomical images while compensating for their respective shortcomings. These methods integrate functional and anatomical images, presenting physiological and metabolic organ information together, so their diagnostic value is far greater than that of single-modality images. Most existing multimodal medical image fusion methods are based on multiscale transformation, which produces pyramid features: low-resolution levels are used to analyse approximate image features, high-resolution levels are used to analyse detailed image features, and different fusion rules are applied at each scale. Although such multiscale-transformation-based methods can fuse multimodal medical images effectively, much detailed information is lost during the forward and inverse transforms, resulting in blurred edges and a loss of detail in the fused images. To overcome this problem, a multimodal medical image fusion method based on interval gradients and convolutional neural networks is proposed. First, interval gradients are used to decompose each image into structure and texture images. Second, deep neural networks are used to extract perception images. Third, three separate rules are used to fuse the structure, texture, and perception images. Finally, the fused components are combined after colour transformation to obtain the final fused image. Compared with the reference algorithms, the proposed method performs better on multiple objective metrics: $Q_{EN}$, $Q_{NIQE}$, $Q_{SD}$, $Q_{SSEQ}$, and $Q_{TMQI}$.
Pages: 15
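
The abstract describes the pipeline only at a high level (interval-gradient decomposition into structure and texture images, CNN-derived perception images, per-component fusion rules, colour transformation). The sketch below is a minimal, hypothetical illustration of just the decomposition-and-fusion idea, not the authors' implementation: it uses a simplified 1-D interval-gradient filter (one-sided Gaussian-weighted averages of the forward differences used to rescale them), applied alternately along rows and columns, with an averaging rule for the structure layers and a max-absolute rule for the texture layers. The CNN perception branch and the colour transform are omitted, and every function name and parameter here (sigma, iterations, the fusion rules) is an assumption.

```python
# Minimal sketch of interval-gradient structure/texture decomposition and a
# simple two-image fusion rule. Assumptions, not the paper's implementation.
import numpy as np


def interval_gradient_filter_1d(signal, sigma=3.0, eps=1e-4):
    """Smooth a 1-D signal by rescaling its forward differences with the
    interval gradient: the difference between right- and left-sided
    Gaussian-weighted averages of the gradient."""
    g = np.diff(signal, append=signal[-1:])          # forward differences
    radius = max(1, int(3 * sigma))
    offsets = np.arange(1, radius + 1)
    w = np.exp(-(offsets - 1) ** 2 / (2 * sigma ** 2))
    w /= w.sum()

    padded = np.pad(g, radius, mode="edge")
    right = np.zeros_like(g)
    left = np.zeros_like(g)
    for k, wk in zip(offsets, w):
        right += wk * padded[radius + k: radius + k + g.size]
        left += wk * padded[radius - k: radius - k + g.size]
    ig = right - left                                # interval gradient

    # Gradients that oscillate against their neighbourhood are treated as
    # texture and suppressed; consistent (structural) gradients are kept.
    scale = np.minimum(1.0, np.abs(ig) / (np.abs(g) + eps))
    scale[np.sign(ig) != np.sign(g)] = 0.0

    smoothed = signal[0] + np.concatenate(([0.0], np.cumsum((scale * g)[:-1])))
    return smoothed + (signal.mean() - smoothed.mean())   # keep the DC level


def structure_texture_decompose(image, sigma=3.0, iterations=3):
    """Alternate 1-D interval-gradient filtering along rows and columns to get
    a structure layer; the residual is the texture layer."""
    structure = image.astype(np.float64).copy()
    for _ in range(iterations):
        structure = np.apply_along_axis(interval_gradient_filter_1d, 1, structure, sigma)
        structure = np.apply_along_axis(interval_gradient_filter_1d, 0, structure, sigma)
    return structure, image - structure


def fuse(img_a, img_b, sigma=3.0):
    """Fuse two registered single-channel images: average the structure
    layers, pick the texture coefficient with the larger magnitude."""
    s_a, t_a = structure_texture_decompose(img_a, sigma)
    s_b, t_b = structure_texture_decompose(img_b, sigma)
    fused_structure = 0.5 * (s_a + s_b)
    fused_texture = np.where(np.abs(t_a) >= np.abs(t_b), t_a, t_b)
    return fused_structure + fused_texture


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((64, 64))      # stand-ins for registered anatomical / functional slices
    b = rng.random((64, 64))
    print(fuse(a, b).shape)       # (64, 64)
```

The key step in this sketch is the gradient rescaling: forward differences whose sign disagrees with the one-sided neighbourhood average are treated as oscillatory texture and zeroed, while consistent (structural) gradients are kept, so the residual layer carries the fine detail that the abstract says multiscale transforms tend to blur.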