HAMMF: Hierarchical attention-based multi-task and multi-modal fusion model for computer-aided diagnosis of Alzheimer's disease

Cited by: 1
Authors
Liu X. [1]
Li W. [1]
Miao S. [1]
Liu F. [2,3,4]
Han K. [5]
Bezabih T.T. [1]
Affiliations
[1] School of Computer Engineering and Science, Shanghai University, Shanghai
[2] Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen
[3] University of Chinese Academy of Sciences, Beijing
[4] BGI-Shenzhen, Shenzhen
[5] Medical and Health Center, Liaocheng People's Hospital, Liaocheng
Keywords
Alzheimer's disease; Attention mechanism; Deep learning; Multi-modal fusion; Multi-task learning; Transformer
DOI
10.1016/j.compbiomed.2024.108564
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative condition, and early intervention can help slow its progression. However, integrating multi-dimensional information through deep convolutional networks increases the number of model parameters, affecting diagnostic accuracy and efficiency and hindering the clinical deployment of diagnostic models. Multi-modal neuroimaging can offer more precise diagnostic results, while jointly modeling classification and regression tasks can enhance the performance and stability of AD diagnosis. This study proposes a Hierarchical Attention-based Multi-task Multi-modal Fusion model (HAMMF) that leverages multi-modal neuroimaging data to concurrently learn an AD classification task, a cognitive score regression task, and an age regression task using attention-based techniques. First, we preprocess MRI and PET images to obtain two modalities, each carrying distinct information. Next, we introduce a novel Contextual Hierarchical Attention Module (CHAM) to aggregate multi-modal features. This module employs channel and spatial attention to extract fine-grained pathological features from unimodal image data across various dimensions. Building on these attention mechanisms, a Transformer effectively captures correlated features across the multi-modal inputs. Finally, we adopt multi-task learning to investigate the influence of different variables on diagnosis, treating classification as the primary task and regression as auxiliary tasks to obtain optimal multi-task prediction performance. Our experiments used MRI and PET images from 720 subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The results show that the proposed model achieves an overall accuracy of 93.15% for AD/NC recognition, and visualization results demonstrate its strong pathological feature recognition performance. © 2024 Elsevier Ltd
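The abstract does not give the internals of the CHAM module, but the channel and spatial attention it describes follow a common pattern: reweight feature channels from pooled global descriptors, then reweight spatial positions from channel-pooled maps. The sketch below is a minimal NumPy illustration of that generic pattern, not the authors' implementation; the reduction ratio, the shared two-layer MLP weights (`w1`, `w2`), and the omission of the spatial convolution are all simplifying assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Reweight channels of x (C, H, W) from avg- and max-pooled descriptors."""
    avg = x.mean(axis=(1, 2))  # (C,) global average pooling
    mx = x.max(axis=(1, 2))    # (C,) global max pooling
    # Shared two-layer MLP (ReLU bottleneck) applied to both descriptors
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return x * att[:, None, None]  # scale each channel by its weight in (0, 1)

def spatial_attention(x):
    """Reweight spatial positions from channel-pooled maps (no conv, simplified)."""
    avg = x.mean(axis=0)  # (H, W) average over channels
    mx = x.max(axis=0)    # (H, W) max over channels
    att = sigmoid(avg + mx)
    return x * att[None, :, :]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                        # r is the channel reduction ratio
x = rng.standard_normal((C, H, W))             # one unimodal feature map
w1 = rng.standard_normal((C // r, C)) * 0.1    # MLP reduction layer
w2 = rng.standard_normal((C, C // r)) * 0.1    # MLP expansion layer
y = spatial_attention(channel_attention(x, w1, w2))
print(y.shape)  # (8, 4, 4) — attention preserves the feature-map shape
```

In a fusion pipeline of this kind, the attended per-modality features would then be flattened into token sequences and passed to a Transformer so that cross-modal correlations can be modeled jointly.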
Related Papers
50 items in total
  • [41] Automatic depression prediction via cross-modal attention-based multi-modal fusion in social networks
    Wang, Lidong
    Zhang, Yin
    Zhou, Bin
    Cao, Shihua
    Hu, Keyong
    Tan, Yunfei
    COMPUTERS & ELECTRICAL ENGINEERING, 2024, 118
  • [42] AGGN: Attention-based glioma grading network with multi-scale feature extraction and multi-modal information fusion
    Wu, Peishu
    Wang, Zidong
    Zheng, Baixun
    Li, Han
    Alsaadi, Fuad E.
    Zeng, Nianyin
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 152
  • [43] Computer-Aided Multi-Target Management of Emergent Alzheimer's Disease
    Kim, Hyunjo
    Han, Hyunwook
    BIOINFORMATION, 2018, 14 (04) : 167 - 180
  • [44] A cross modal hierarchical fusion multimodal sentiment analysis method based on multi-task learning
    Wang, Lan
    Peng, Junjie
    Zheng, Cangzhi
    Zhao, Tong
    Zhu, Li'an
    INFORMATION PROCESSING & MANAGEMENT, 2024, 61 (03)
  • [45] Deep Multi-Modal Discriminative and Interpretability Network for Alzheimer's Disease Diagnosis
    Zhu, Qi
    Xu, Bingliang
    Huang, Jiashuang
    Wang, Heyang
    Xu, Ruting
    Shao, Wei
    Zhang, Daoqiang
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2023, 42 (05) : 1472 - 1483
  • [46] ENHANCING ALZHEIMER'S DISEASE DIAGNOSIS VIA HIERARCHICAL 3D-FCN WITH MULTI-MODAL FEATURES
    Liu, Chao
    Yang, Xiaodong
    Chong, Dading
    Wang, Wenwu
    Li, Liang
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 304 - 308
  • [47] Vector Quantized Multi-modal Guidance for Alzheimer's Disease Diagnosis Based on Feature Imputation
    Zhang, Yuanwang
    Sun, Kaicong
    Liu, Yuxiao
    Ou, Zaixin
    Shen, Dinggang
    MACHINE LEARNING IN MEDICAL IMAGING, MLMI 2023, PT I, 2024, 14348 : 403 - 412
  • [48] A hierarchical attention-based multimodal fusion framework for predicting the progression of Alzheimer's disease
    Lu, Peixin
    Hu, Lianting
    Mitelpunkt, Alexis
    Bhatnagar, Surbhi
    Lu, Long
    Liang, Huiying
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 88
  • [49] Deep Self-Reconstruction Fusion Similarity Hashing for the Diagnosis of Alzheimer's Disease on Multi-Modal Data
    Wu, Tian-Ru
    Jiao, Cui-Na
    Cui, Xinchun
    Wang, Yan-Li
    Zheng, Chun-Hou
    Liu, Jin-Xing
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (06) : 3513 - 3522
  • [50] Attention-based multi-modal fusion for improved real estate appraisal: a case study in Los Angeles
    Bin, Junchi
    Gardiner, Bryan
    Liu, Zheng
    Li, Eric
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 : 31163 - 31184