HAMMF: Hierarchical attention-based multi-task and multi-modal fusion model for computer-aided diagnosis of Alzheimer's disease

Cited by: 1
Authors
Liu X. [1 ]
Li W. [1 ]
Miao S. [1 ]
Liu F. [2 ,3 ,4 ]
Han K. [5 ]
Bezabih T.T. [1 ]
Affiliations
[1] School of Computer Engineering and Science, Shanghai University, Shanghai
[2] Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen
[3] University of Chinese Academy of Sciences, Beijing
[4] BGI-Shenzhen, Shenzhen
[5] Medical and Health Center, Liaocheng People's Hospital, Liaocheng
Keywords
Alzheimer's disease; Attention mechanism; Deep learning; Multi-modal fusion; Multi-task learning; Transformer;
DOI
10.1016/j.compbiomed.2024.108564
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative condition, and early intervention can help slow its progression. However, integrating multi-dimensional information with deep convolutional networks increases the number of model parameters, affecting diagnostic accuracy and efficiency and hindering clinical deployment of diagnostic models. Multi-modal neuroimaging can offer more precise diagnostic results, while multi-task modeling of classification and regression tasks can enhance the performance and stability of AD diagnosis. This study proposes a Hierarchical Attention-based Multi-task Multi-modal Fusion model (HAMMF) that leverages multi-modal neuroimaging data to concurrently learn an AD classification task, a cognitive score regression task, and an age regression task using attention-based techniques. First, we preprocess MRI and PET images to obtain two modalities of data, each containing distinct information. Next, we incorporate a novel Contextual Hierarchical Attention Module (CHAM) to aggregate multi-modal features. This module employs channel and spatial attention to extract fine-grained pathological features from unimodal image data across various dimensions. Using these attention mechanisms, the Transformer can effectively capture correlated features of the multi-modal inputs. Finally, we adopt multi-task learning in our model to investigate the influence of different variables on diagnosis, with a primary classification task and secondary regression tasks for optimal multi-task prediction performance. Our experiments utilized MRI and PET images from 720 subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The results show that our proposed model achieves an overall accuracy of 93.15% for AD/NC recognition, and the visualization results demonstrate its strong pathological feature recognition performance. © 2024 Elsevier Ltd
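The abstract describes two reusable ingredients: channel/spatial attention applied to each modality before fusion, and a multi-task objective combining one classification loss with two regression losses. The sketch below illustrates both ideas in minimal numpy form; it is not the authors' implementation. All function names, the tiny MLP weights (`w1`, `w2`), and the loss weights (`w_cls`, `w_score`, `w_age`) are illustrative assumptions, and the attention follows the common squeeze-and-excite / mean-max pattern rather than the paper's exact CHAM design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """feat: (C, H, W). Squeeze spatially, excite each channel."""
    z = feat.mean(axis=(1, 2))            # (C,) global average pool
    s = sigmoid(w2 @ np.tanh(w1 @ z))     # (C,) learned channel weights
    return feat * s[:, None, None]

def spatial_attention(feat):
    """Fuse channel-wise mean and max into one spatial weight map."""
    m = feat.mean(axis=0)                 # (H, W)
    x = feat.max(axis=0)                  # (H, W)
    s = sigmoid(m + x)                    # (H, W) spatial weights
    return feat * s[None, :, :]

def multitask_loss(logits, label, score_pred, score, age_pred, age,
                   w_cls=1.0, w_score=0.5, w_age=0.5):
    """Weighted sum: cross-entropy for AD/NC classification plus
    squared error for cognitive-score and age regression."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    ce = -np.log(p[label])
    return (w_cls * ce
            + w_score * (score_pred - score) ** 2
            + w_age * (age_pred - age) ** 2)
```

In the paper's setting, each modality (MRI, PET) would pass through such attention blocks before the Transformer fuses them, and the classification term would dominate the combined loss; the weighting between primary and secondary tasks is a tunable design choice.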