VANet: a medical image fusion model based on attention mechanism to assist disease diagnosis

Cited by: 0
Authors
Guo, Kai [2 ,3 ]
Li, Xiongfei [2 ,3 ]
Fan, Tiehu [1 ]
Hu, Xiaohan [4 ]
Affiliations
[1] Jilin Univ, Coll Instrumentat & Elect Engn, Changchun, Peoples R China
[2] Jilin Univ, Key Lab Symbol Computat & Knowledge Engn, Minist Educ, Changchun, Peoples R China
[3] Jilin Univ, Coll Comp Sci & Technol, Changchun, Peoples R China
[4] First Hosp Jilin Univ, Dept Radiol, Changchun, Peoples R China
Funding
Industrial Technology Research and Development Fund Project; National Natural Science Foundation of China
Keywords
Medical image; Medical image fusion; Attention mechanism; Contextual information; Multi-scale feature extraction
DOI
10.1186/s12859-022-05072-4
CLC classification number
Q5 [Biochemistry]
Subject classification codes
071010; 081704
Abstract
Background: Today's biomedical imaging technology can present the morphological structure or functional metabolic information of organisms at different scales, such as the organ, tissue, cell, molecule and gene levels. However, different imaging modalities differ in their scope of application, advantages and disadvantages. To strengthen the role of medical images in disease diagnosis, fusing biomedical image information across imaging modalities and scales has become an important research direction in medical imaging. Traditional medical image fusion methods concentrate on designing activity-level measurements and fusion rules; they do not mine the contextual features of images from different modalities, which limits the quality of the fused images.

Method: In this paper, an attention-multiscale network medical image fusion model based on contextual features is proposed. The model selects five backbone modules of the VGG-16 network to build encoders that extract the contextual features of medical images. An attention-mechanism branch fuses the global contextual features, and a residual multiscale detail-processing branch fuses the local contextual features. Finally, a decoder reconstructs the cascaded features to obtain the fused image.

Results: Ten sets of images related to five diseases are selected from the AANLIB database to validate the VANet model. Structural images are derived from high-resolution MR images, and functional images are derived from SPECT and PET images, which are well suited to describing organ blood flow and tissue metabolism. Fusion experiments are performed on twelve fusion algorithms, including the VANet model. Eight metrics covering different aspects form a fusion-quality evaluation system used to assess the fused images. Friedman's test and the post-hoc Nemenyi test are introduced to provide a statistical comparison demonstrating the superiority of the VANet model.

Conclusions: The VANet model fully captures and fuses the texture details and color information of the source images. In the fusion results, metabolic and structural information is well expressed, and the color information does not interfere with the structure and texture. In terms of the objective evaluation system, the metric values of the VANet model are generally higher than those of the other methods; in terms of efficiency, the time consumption of the model is acceptable; in terms of scalability, the model is not affected by the input order of the source images and can be extended to tri-modal fusion.
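The Method paragraph above describes the VANet architecture only at a high level: VGG-16 backbone encoders for contextual features, an attention branch that fuses global context, a residual multiscale branch that fuses local detail, and a decoder that reconstructs the fused image. The following is a minimal, hypothetical PyTorch sketch of that layout; the encoder depth, channel widths and fusion operations are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of the VANet pipeline described in the abstract: a VGG-16-based
# encoder, an attention branch for global context, a residual multi-scale branch for
# local detail, and a decoder for reconstruction. Not the authors' exact configuration.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class VANetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: early VGG-16 convolutional blocks (random weights here; pretrained
        # weights optional). The paper uses five backbone modules; only the first three
        # blocks are kept to keep the sketch small, ending after conv3_3 (256 channels).
        features = vgg16(weights=None).features
        self.encoder = nn.Sequential(*list(features.children())[:16])

        # Attention branch: channel attention over the concatenated contextual features.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(512, 512 // 8, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(512 // 8, 512, kernel_size=1), nn.Sigmoid(),
        )

        # Residual multi-scale detail branch: parallel dilated convolutions.
        self.detail = nn.ModuleList([
            nn.Conv2d(512, 128, kernel_size=3, padding=d, dilation=d) for d in (1, 2, 4)
        ])
        self.detail_fuse = nn.Conv2d(128 * 3, 512, kernel_size=1)

        # Decoder: cascade reconstruction back to a single-channel fused image.
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, structural, functional):
        # Encode each modality separately, then concatenate contextual features.
        f1, f2 = self.encoder(structural), self.encoder(functional)
        context = torch.cat([f1, f2], dim=1)                 # (B, 512, H/4, W/4)

        global_feat = context * self.attention(context)      # global contextual fusion
        local_feat = self.detail_fuse(
            torch.cat([conv(context) for conv in self.detail], dim=1)
        ) + context                                           # residual local fusion

        return self.decoder(global_feat + local_feat)


if __name__ == "__main__":
    mr = torch.randn(1, 3, 256, 256)       # structural (MR) image
    pet = torch.randn(1, 3, 256, 256)      # functional (PET/SPECT) image
    print(VANetSketch()(mr, pet).shape)    # torch.Size([1, 1, 256, 256])
```

A forward pass with two 256x256 inputs yields a single-channel 256x256 output standing in for the fused image; the paper's loss functions, color handling and training procedure are not reproduced here.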
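The Results paragraph reports that twelve fusion algorithms are compared on eight metrics, with Friedman's test followed by the post-hoc Nemenyi test for statistical comparison. The snippet below is a hedged sketch of that evaluation step, assuming metric scores are arranged as a (test cases x algorithms) array; the synthetic data, the SciPy routine and the scikit-posthocs package are choices made here for illustration, not tools named in the paper.

```python
# Friedman test across twelve fusion algorithms, followed by a Nemenyi post-hoc test.
# The score matrix is synthetic; in the paper each row would correspond to one
# fused-image/metric case and each column to one of the twelve fusion methods.
import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp   # assumption: the scikit-posthocs package is installed

rng = np.random.default_rng(0)
scores = rng.random((10, 12))            # 10 test cases x 12 fusion algorithms (synthetic)

stat, p = friedmanchisquare(*scores.T)   # one sample per algorithm (one column each)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

if p < 0.05:
    # Pairwise Nemenyi p-values; rows and columns both index the algorithms.
    nemenyi = sp.posthoc_nemenyi_friedman(scores)
    print(nemenyi.round(3))
```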
Pages: 32
Related Papers
50 records in total
  • [1] VANet: a medical image fusion model based on attention mechanism to assist disease diagnosis
    Kai Guo
    Xiongfei Li
    Tiehu Fan
    Xiaohan Hu
    BMC Bioinformatics, 23
  • [2] Chaotic medical image encryption method using attention mechanism fusion ResNet model
    Li, Xiaowu
    Peng, Huiling
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [3] Hybrid attention mechanism of feature fusion for medical image segmentation
    Tong, Shanshan
    Zuo, Zhentao
    Liu, Zuxiang
    Sun, Dengdi
    Zhou, Tiangang
    IET IMAGE PROCESSING, 2024, 18 (01): 77-87
  • [4] Grape Disease Recognition Model Based on Attention Mechanism and Feature Fusion
    Jia L.
    Ye Z.
    Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, 2023, 54 (07): 223-233
  • [5] AMMNet: A multimodal medical image fusion method based on an attention mechanism and MobileNetV3
    Di, Jing
    Guo, Wenqing
    Liu, Jizhao
    Ren, Li
    Lian, Jing
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 96
  • [6] AF-Net: A Medical Image Segmentation Network Based on Attention Mechanism and Feature Fusion
    Hou, Guimin
    Qin, Jiaohua
    Xiang, Xuyu
    Tan, Yun
    Xiong, Neal N.
    CMC-COMPUTERS MATERIALS & CONTINUA, 2021, 69 (02): 1877-1891
  • [7] Remote Sensing Image Fusion Method Based on Retinex Model and Hybrid Attention Mechanism
    Ye, Yongxu
    Wang, Tingting
    Fang, Faming
    Zhang, Guixu
    SPACE INFORMATION NETWORKS, SINC 2023, 2024, 2057: 68-82
  • [8] Feature Fusion Network Model Based on Dual Attention Mechanism for Hyperspectral Image Classification
    Cui, Ying
    Li, WenShan
    Chen, Liwei
    Wang, Liguo
    Jiang, Jing
    Gao, Shan
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [9] Research on Multi-model Fusion Algorithm for Image Dehazing Based on Attention Mechanism
    Cui, Tong
    Zhang, Meng
    Ge, Silin
    Chen, Xuhao
    INTELLIGENT ROBOTICS AND APPLICATIONS (ICIRA 2022), PT II, 2022, 13456: 523-535
  • [10] Research on Medical Image Classification Based on Triple Fusion Attention
    Wang, Y. G.
    Wang, L.
    Geng, Y. X.
    ENGINEERING LETTERS, 2025, 33 (01): 124-131