VANet: a medical image fusion model based on attention mechanism to assist disease diagnosis

Cited: 0
Authors
Guo, Kai [2 ,3 ]
Li, Xiongfei [2 ,3 ]
Fan, Tiehu [1 ]
Hu, Xiaohan [4 ]
Affiliations
[1] Jilin Univ, Coll Instrumentat & Elect Engn, Changchun, Peoples R China
[2] Jilin Univ, Key Lab Symbol Computat & Knowledge Engn, Minist Educ, Changchun, Peoples R China
[3] Jilin Univ, Coll Comp Sci & Technol, Changchun, Peoples R China
[4] First Hosp Jilin Univ, Dept Radiol, Changchun, Peoples R China
Funding
Industrial Technology Research and Development Funds Project; National Natural Science Foundation of China;
Keywords
Medical image; Medical image fusion; Attention mechanism; Contextual information; Multi-scale feature extraction;
DOI
10.1186/s12859-022-05072-4
Chinese Library Classification
Q5 [Biochemistry];
Subject Classification Code
071010 ; 081704 ;
Abstract
Background: Today's biomedical imaging technology can present the morphological structure or functional metabolic information of organisms at different scales, such as organ, tissue, cell, molecule and gene. However, different imaging modalities differ in their scope of application, advantages and disadvantages. To strengthen the role of medical images in disease diagnosis, fusing biomedical image information across imaging modalities and scales has become an important research direction in medical imaging. Traditional medical image fusion methods are designed around hand-crafted activity-level measurements and fusion rules; they fail to mine the contextual features of images from different modalities, which hinders further improvement of fused image quality.

Method: This paper proposes an attention-multiscale network medical image fusion model based on contextual features. The model selects five backbone modules of the VGG-16 network to build encoders that extract the contextual features of medical images. It builds an attention-mechanism branch to fuse global contextual features and designs a residual multiscale detail-processing branch to fuse local contextual features. Finally, the decoder performs cascade reconstruction of the features to obtain the fused image.

Results: Ten sets of images covering five diseases are selected from the AANLIB database to validate the VANet model. Structural images are high-resolution MR images, and functional images are SPECT and PET images, which are well suited to describing organ blood-flow levels and tissue metabolism. Fusion experiments are performed on twelve fusion algorithms, including the VANet model. Eight metrics covering different aspects are selected to build a fusion quality evaluation system for assessing the fused images. Friedman's test and the post-hoc Nemenyi test are introduced as statistical tests to demonstrate the superiority of the VANet model.

Conclusions: The VANet model fully captures and fuses the texture details and color information of the source images. In the fusion results, metabolic and structural information is well expressed, and color information does not interfere with structure and texture. In terms of the objective evaluation system, the metric values of the VANet model are generally higher than those of other methods; in terms of efficiency, the model's time consumption is acceptable; in terms of scalability, the model is not affected by the input order of the source images and can be extended to tri-modal fusion.
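The Method paragraph above outlines an encoder / dual-branch / decoder layout. The following is a minimal PyTorch sketch of that kind of architecture, not the published VANet implementation: the module names (AttentionBranch, ResidualMultiscaleBranch, FusionSketch), the channel widths, the use of only the first VGG-16 block, and the channel-attention and dilated-convolution fusion logic are all illustrative assumptions.

```python
# Minimal sketch of an encoder / dual-branch / decoder fusion layout of the kind
# described in the abstract. Module names, channel widths, and the fusion logic
# are illustrative assumptions, not the published VANet implementation.
import torch
import torch.nn as nn
from torchvision.models import vgg16  # torchvision >= 0.13

class AttentionBranch(nn.Module):
    """Global-context fusion via simple channel attention (assumed design)."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a, feat_b):
        fused = feat_a + feat_b
        return fused * self.fc(fused)  # re-weight channels by global context

class ResidualMultiscaleBranch(nn.Module):
    """Local-detail fusion with parallel dilated convolutions and a residual path."""
    def __init__(self, channels):
        super().__init__()
        self.scales = nn.ModuleList([
            nn.Conv2d(channels * 2, channels, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        self.merge = nn.Conv2d(channels * 3, channels, kernel_size=1)

    def forward(self, feat_a, feat_b):
        x = torch.cat([feat_a, feat_b], dim=1)
        multi = torch.cat([s(x) for s in self.scales], dim=1)
        return self.merge(multi) + feat_a + feat_b  # residual connection

class FusionSketch(nn.Module):
    """VGG-16 features -> global + local fusion branches -> small conv decoder."""
    def __init__(self):
        super().__init__()
        # First VGG-16 block (two convs) as a stand-in for the five backbone modules.
        self.encoder = vgg16(weights=None).features[:4]
        self.global_branch = AttentionBranch(64)
        self.local_branch = ResidualMultiscaleBranch(64)
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
        )

    def forward(self, img_a, img_b):
        fa, fb = self.encoder(img_a), self.encoder(img_b)
        fused = torch.cat([self.global_branch(fa, fb),
                           self.local_branch(fa, fb)], dim=1)
        return self.decoder(fused)

if __name__ == "__main__":
    model = FusionSketch()
    mr = torch.rand(1, 3, 256, 256)   # structural (MR) image
    pet = torch.rand(1, 3, 256, 256)  # functional (PET/SPECT) image, pseudo-RGB
    print(model(mr, pet).shape)       # torch.Size([1, 1, 256, 256])
```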
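The Results paragraph mentions Friedman's test followed by a post-hoc Nemenyi test over the competing algorithms. Below is a minimal sketch of how such a ranking comparison is typically run with SciPy and the third-party scikit-posthocs package; the score matrix is random placeholder data, not the paper's measurements.

```python
# Sketch of the Friedman + post-hoc Nemenyi procedure mentioned in the abstract.
# The scores below are random placeholders, not the paper's measurements.
import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp

rng = np.random.default_rng(0)
# Rows: 10 fused image sets (blocks); columns: metric scores of 3 example algorithms.
scores = rng.random((10, 3))

# Friedman test: do the algorithms' rankings differ significantly overall?
stat, p = friedmanchisquare(*[scores[:, j] for j in range(scores.shape[1])])
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3f}")

# Post-hoc Nemenyi test: pairwise comparison of algorithms after Friedman.
print(sp.posthoc_nemenyi_friedman(scores))
```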
Pages: 32
Related Papers
50 records in total
  • [21] Medical image fusion based on Spiking Cortical Model
    Wang, Rui
    Wu, Yi
    Ding, Mingyue
    Zhang, Xuming
    MEDICAL IMAGING 2013: DIGITAL PATHOLOGY, 2013, 8676
  • [22] NSCT Based Multispectral Medical Image Fusion Model
    Bhateja, Vikrant
    Srivastava, Anuja
    Moin, Aisha
Lay-Ekuakille, Aime
2016 IEEE INTERNATIONAL SYMPOSIUM ON MEDICAL MEASUREMENTS AND APPLICATIONS (MEMEA), 2016: 453 - 457
  • [23] Text sentiment analysis of fusion model based on attention mechanism
    Deng, Hongjie
    Ergu, Daji
    Liu, Fangyao
    Cai, Ying
    Ma, Bo
    8TH INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY AND QUANTITATIVE MANAGEMENT (ITQM 2020 & 2021): DEVELOPING GLOBAL DIGITAL ECONOMY AFTER COVID-19, 2022, 199 : 741 - 748
  • [24] Medical nucleus image segmentation network based on convolution and attention mechanism
    Zhi P.
    Deng J.
    Zhong Z.
Shengwu Yixue Gongchengxue Zazhi/Journal of Biomedical Engineering, 2022, 39 (04): 730 - 739
  • [25] A Synergic Neural Network For Medical Image Classification Based On Attention Mechanism
    Wang Shanshan
    Zhang Tao
    Li Fei
    Ruan ZhenPing
    Yang Zhen
    Zhan Shu
    Zhang ZhiQiang
2022 ASIA CONFERENCE ON ALGORITHMS, COMPUTING AND MACHINE LEARNING (CACML 2022), 2022: 82 - 87
  • [26] CFDformer: Medical image segmentation based on cross fusion dual attention network
    Yang, Zhou
    Wang, Hua
    Liu, Yepeng
    Zhang, Fan
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2025, 101
  • [27] An Intelligent Model to Assist Medical Diagnosis Based on Tolerant Rough Sets
    Nie, Bin
    Li, Weimin
    Wang, Mingyan
    Li, Nan
    Wang, Juan
    Lu, Qiang
    Li, Bei
PROGRESS IN INTELLIGENCE COMPUTATION AND APPLICATIONS, 2008: 228 - 231
  • [28] Image Geolocation Method Based on Attention Mechanism Front Loading and Feature Fusion
    Lu, Huayuan
    Yang, Chunfang
    Qi, Baojun
    Zhu, Ma
    Xu, Jingqian
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022
  • [29] MAAFusion: A Multimodal Medical Image Fusion Network Via Arbitrary Kernel Convolution And Attention Mechanism
    Wang, Wenqing
    He, Ji
    Li, Lingzhou
2024 2ND ASIA CONFERENCE ON COMPUTER VISION, IMAGE PROCESSING AND PATTERN RECOGNITION, CVIPPR 2024, 2024
  • [30] AFFNet: Attention Mechanism Network Based on Fusion Feature for Image Cloud Removal
    Shen, Runhan
    Zhang, Xiaofeng
    Xiang, Yonggang
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36 (08)