HAMMF: Hierarchical attention-based multi-task and multi-modal fusion model for computer-aided diagnosis of Alzheimer's disease

Cited by: 1
Authors
Liu X. [1 ]
Li W. [1 ]
Miao S. [1 ]
Liu F. [2 ,3 ,4 ]
Han K. [5 ]
Bezabih T.T. [1 ]
Affiliations
[1] School of Computer Engineering and Science, Shanghai University, Shanghai
[2] Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen
[3] University of Chinese Academy of Sciences, Beijing
[4] BGI-Shenzhen, Shenzhen
[5] Medical and Health Center, Liaocheng People's Hospital, Liaocheng
Keywords
Alzheimer's disease; Attention mechanism; Deep learning; Multi-modal fusion; Multi-task learning; Transformer;
DOI
10.1016/j.compbiomed.2024.108564
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative condition, and early intervention can help slow its progression. However, integrating multi-dimensional information through deep convolutional networks increases the model's parameter count, affecting diagnostic accuracy and efficiency and hindering clinical deployment. Multi-modal neuroimaging can offer more precise diagnostic results, while multi-task modeling of classification and regression tasks can enhance the performance and stability of AD diagnosis. This study proposes a Hierarchical Attention-based Multi-task Multi-modal Fusion model (HAMMF) that leverages multi-modal neuroimaging data to concurrently learn an AD classification task, a cognitive score regression task, and an age regression task using attention-based techniques. First, we preprocess MRI and PET images to obtain data from two modalities, each containing distinct information. Next, we introduce a novel Contextual Hierarchical Attention Module (CHAM) to aggregate multi-modal features. This module employs channel and spatial attention to extract fine-grained pathological features from unimodal image data across various dimensions. Building on these attention mechanisms, a Transformer effectively captures correlated features across the multi-modal inputs. Finally, we adopt multi-task learning to investigate the influence of different variables on diagnosis, with a primary classification task and secondary regression tasks, for optimal multi-task prediction performance. Our experiments used MRI and PET images from 720 subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The results show that the proposed model achieves an overall accuracy of 93.15% for AD/NC recognition, and the visualization results demonstrate its strong pathological feature recognition performance. © 2024 Elsevier Ltd
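The full method is behind the DOI above; as a rough illustration of the channel-then-spatial attention fusion the abstract describes, here is a minimal NumPy sketch. All function names, the squeeze-and-excite-style gating, and the token layout are assumptions for illustration, not the authors' CHAM implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Gate each channel by its global average response
    # (a squeeze-and-excite-style simplification of channel attention).
    w = sigmoid(feat.mean(axis=(1, 2)))           # (C,)
    return feat * w[:, None, None]

def spatial_attention(feat):
    # feat: (C, H, W). Gate each spatial location by the cross-channel mean,
    # highlighting regions with strong responses across channels.
    m = sigmoid(feat.mean(axis=0))                # (H, W)
    return feat * m[None, :, :]

def fuse_modalities(mri_feat, pet_feat):
    # Refine each modality's feature map with channel then spatial attention,
    # then flatten both into one token sequence that a downstream
    # Transformer could attend over to capture cross-modal correlations.
    refined = [spatial_attention(channel_attention(f))
               for f in (mri_feat, pet_feat)]
    tokens = np.concatenate(
        [f.reshape(f.shape[0], -1).T for f in refined], axis=0)
    return tokens                                  # (2*H*W, C)

# Toy unimodal feature maps standing in for CNN outputs on MRI and PET.
mri = np.random.rand(8, 4, 4)
pet = np.random.rand(8, 4, 4)
tokens = fuse_modalities(mri, pet)
print(tokens.shape)  # (32, 8)
```

In the paper's actual pipeline the refined features feed a Transformer and three heads (classification plus two regressions); this sketch only shows the attention-gating and token-fusion step.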
Related papers
50 records in total
  • [21] A multi-modal fusion framework based on multi-task correlation learning for cancer prognosis prediction
    Tan, Kaiwen
    Huang, Weixian
    Liu, Xiaofeng
    Hu, Jinlong
    Dong, Shoubin
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2022, 126
  • [22] Diagnosis of Alzheimer's Disease by Canonical Correlation Analysis Based Fusion of Multi-Modal Medical Images
    Baninajjar, Anahita
    Soltanian-Zadeh, Hamid
    Rezaie, Sajad
    Mohammadi-Nejad, Ali-Reza
    2020 INTERNATIONAL CONFERENCE ON E-HEALTH AND BIOENGINEERING (EHB), 2020
  • [23] AMSF: attention-based multi-view slice fusion for early diagnosis of Alzheimer's disease
    Zhang, Yameng
    Peng, Shaokang
    Xue, Zhihua
    Zhao, Guohua
    Li, Qing
    Zhu, Zhiyuan
    Gao, Yufei
    Kong, Lingfei
    PEERJ COMPUTER SCIENCE, 2023, 9
  • [24] Computer-aided prognosis: Predicting patient and disease outcome via quantitative fusion of multi-scale, multi-modal data
    Madabhushi, Anant
    Agner, Shannon
    Basavanhally, Ajay
    Doyle, Scott
    Lee, George
    COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, 2011, 35 (7-8) : 506 - 514
  • [25] Latent Edge Guided Depth Super-Resolution Using Attention-Based Hierarchical Multi-Modal Fusion
    Lan, Hui
    Jung, Cheolkon
    IEEE ACCESS, 2024, 12 : 114512 - 114526
  • [26] Attention-Based Multi-Modal Multi-View Fusion Approach for Driver Facial Expression Recognition
    Chen, Jianrong
    Dey, Sujit
    Wang, Lei
    Bi, Ning
    Liu, Peng
    IEEE ACCESS, 2024, 12 : 137203 - 137221
  • [27] Object Interaction Recommendation with Multi-Modal Attention-based Hierarchical Graph Neural Network
    Zhang, Huijuan
    Liang, Lipeng
    Wang, Dongqing
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 295 - 305
  • [28] Subclass-based multi-task learning for Alzheimer's disease diagnosis
    Suk, Heung-Il
    Lee, Seong-Whan
    Shen, Dinggang
    FRONTIERS IN AGING NEUROSCIENCE, 2014, 6 : 1 - 20
  • [29] MM-HiFuse: multi-modal multi-task hierarchical feature fusion for esophagus cancer staging and differentiation classification
    Huo, Xiangzuo
    Tian, Shengwei
    Yu, Long
    Zhang, Wendong
    Li, Aolun
    Yang, Qimeng
    Song, Jinmiao
    COMPLEX & INTELLIGENT SYSTEMS, 2025, 11 (01)
  • [30] Hierarchical multi-modal fusion FCN with attention model for RGB-D tracking
    Jiang, Ming-xin
    Deng, Chao
    Shan, Jing-song
    Wang, Yuan-yuan
    Jia, Yin-jie
    Sun, Xing
    INFORMATION FUSION, 2019, 50 : 1 - 8