HAMMF: Hierarchical attention-based multi-task and multi-modal fusion model for computer-aided diagnosis of Alzheimer's disease

Cited by: 1
Authors
Liu X. [1 ]
Li W. [1 ]
Miao S. [1 ]
Liu F. [2 ,3 ,4 ]
Han K. [5 ]
Bezabih T.T. [1 ]
Affiliations
[1] School of Computer Engineering and Science, Shanghai University, Shanghai
[2] Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen
[3] University of Chinese Academy of Sciences, Beijing
[4] BGI-Shenzhen, Shenzhen
[5] Medical and Health Center, Liaocheng People's Hospital, Liaocheng
Keywords
Alzheimer's disease; Attention mechanism; Deep learning; Multi-modal fusion; Multi-task learning; Transformer
DOI
10.1016/j.compbiomed.2024.108564
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative condition, and early intervention can help slow its progression. However, integrating multi-dimensional information with deep convolutional networks increases the number of model parameters, affecting diagnostic accuracy and efficiency and hindering the clinical deployment of diagnostic models. Multi-modal neuroimaging can offer more precise diagnostic results, while jointly modeling classification and regression tasks can enhance the performance and stability of AD diagnosis. This study proposes a Hierarchical Attention-based Multi-task Multi-modal Fusion model (HAMMF) that leverages multi-modal neuroimaging data to concurrently learn an AD classification task, a cognitive-score regression task, and an age regression task using attention-based techniques. First, we preprocess MRI and PET image data to obtain two modalities, each carrying distinct information. Next, we introduce a novel Contextual Hierarchical Attention Module (CHAM) to aggregate multi-modal features. This module employs channel and spatial attention to extract fine-grained pathological features from unimodal image data across various dimensions. Building on these attention mechanisms, a Transformer then captures correlated features across the multi-modal inputs. Finally, we adopt multi-task learning, with classification as the primary task and regression as auxiliary tasks, to investigate the influence of different variables on diagnosis and achieve optimal multi-task prediction performance. Our experiments used MRI and PET images from 720 subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The results show that the proposed model achieves an overall accuracy of 93.15% for AD/NC recognition, and visualization results demonstrate its strong pathological feature recognition performance. © 2024 Elsevier Ltd
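The two mechanisms the abstract describes, channel/spatial attention gating over unimodal features and a weighted multi-task objective, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the learned convolution/MLP layers inside the attention blocks are replaced by fixed pooling-and-sum operations, and the loss weights are arbitrary placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Average- and max-pool each channel to a scalar,
    # then gate the channels. (The paper would use a learned shared MLP
    # here; a plain sum stands in for it.)
    avg = feat.mean(axis=(1, 2))              # (C,)
    mx = feat.max(axis=(1, 2))                # (C,)
    weights = sigmoid(avg + mx)               # (C,) in (0, 1)
    return feat * weights[:, None, None]

def spatial_attention(feat):
    # feat: (C, H, W). Pool across channels to two (H, W) maps, fuse
    # them (a learned conv in practice; a sum here), and gate spatially.
    avg = feat.mean(axis=0)                   # (H, W)
    mx = feat.max(axis=0)                     # (H, W)
    gate = sigmoid(avg + mx)                  # (H, W) in (0, 1)
    return feat * gate[None, :, :]

def cham_like(feat):
    # Channel attention followed by spatial attention, the CBAM-style
    # ordering suggested by the abstract's description of CHAM.
    return spatial_attention(channel_attention(feat))

def multitask_loss(cls_loss, cog_loss, age_loss, w=(1.0, 0.5, 0.5)):
    # Primary classification loss plus weighted auxiliary regression
    # losses; the weights are illustrative, not the paper's values.
    return w[0] * cls_loss + w[1] * cog_loss + w[2] * age_loss

mri_feat = np.random.rand(8, 4, 4)            # toy unimodal feature map
fused = cham_like(mri_feat)
print(fused.shape)                            # (8, 4, 4)
```

In the full model, the gated MRI and PET feature maps would then be flattened into token sequences and passed to a Transformer for cross-modal fusion, and the three task heads would be trained jointly with a combined loss of this form.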
Related papers (50 records in total)
  • [31] Liu, Yanbei; Fan, Lianxi; Zhang, Changqing; Zhou, Tao; Xiao, Zhitao; Geng, Lei; Shen, Dinggang. Incomplete multi-modal representation learning for Alzheimer's disease diagnosis. MEDICAL IMAGE ANALYSIS, 2021, 69.
  • [32] Dai, Yin; Qiu, Daoyun; Wang, Yang; Dong, Sizhe; Wang, Hong-Li. Research on Computer-Aided Diagnosis of Alzheimer's Disease Based on Heterogeneous Medical Data Fusion. INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2019, 33 (05).
  • [33] Wan, Zhengyu; Shao, Xinhui. Disease Classification Model Based on Multi-Modal Feature Fusion. IEEE ACCESS, 2023, 11: 27536-27545.
  • [34] Zhang, Zi-Chao; Zhao, Xingzhong; Dong, Guiying; Zhao, Xing-Ming. Improving Alzheimer's Disease Diagnosis With Multi-Modal PET Embedding Features by a 3D Multi-Task MLP-Mixer Neural Network. IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2023, 27 (08): 4040-4051.
  • [35] Ma, Chenbin; Ma, Yulan; Pan, Longsheng; Li, Xuemei; Yin, Chunyu; Zong, Rui; Zhang, Zhengbo. Automatic diagnosis of multi-task in essential tremor: Dynamic handwriting analysis using multi-modal fusion neural network. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 145: 429-441.
  • [36] Madabhushi, Anant; Basavanhally, Ajay; Doyle, Scott; Agner, Shannon; Lee, George. Computer-Aided Prognosis: Predicting Patient and Disease Outcome via Multi-Modal Image Analysis. 2010 7TH IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING: FROM NANO TO MACRO, 2010: 1415-1418.
  • [37] Tong, Tong; Gray, Katherine; Gao, Qinquan; Chen, Liang; Rueckert, Daniel. Multi-modal classification of Alzheimer's disease using nonlinear graph fusion. PATTERN RECOGNITION, 2017, 63: 171-181.
  • [38] Zhou, Tao; Thung, Kim-Han; Liu, Mingxia; Shi, Feng; Zhang, Changqing; Shen, Dinggang. Multi-modal Neuroimaging Data Fusion via Latent Space Learning for Alzheimer's Disease Diagnosis. PREDICTIVE INTELLIGENCE IN MEDICINE, 2018, 11121: 76-84.
  • [39] Jiang, Shunqin; Feng, Qiyuan; Li, Hengxin; Deng, Zhenyun; Jiang, Qinghong. Attention based multi-task interpretable graph convolutional network for Alzheimer's disease analysis. PATTERN RECOGNITION LETTERS, 2024, 180: 1-8.
  • [40] Xi, Yang; Wang, Qian; Wu, Chenxue; Zhang, Lu; Chen, Ying; Lan, Zhu. Predicting conversion of Alzheimer's disease based on multi-modal fusion of neuroimaging and genetic data. COMPLEX & INTELLIGENT SYSTEMS, 2025, 11 (01).