Multi-level fusion network for mild cognitive impairment identification using multi-modal neuroimages

Cited: 2
Authors
Xu, Haozhe [1 ,2 ,3 ]
Zhong, Shengzhou [1 ,2 ,3 ]
Zhang, Yu [1 ,2 ,3 ]
Affiliations
[1] Southern Med Univ, Sch Biomed Engn, Guangzhou 510515, Peoples R China
[2] Southern Med Univ, Guangdong Prov Key Lab Med Image Proc, Guangzhou 510515, Peoples R China
[3] Southern Med Univ, Guangdong Prov Engn Lab Med Imaging & Diagnost Tec, Guangzhou 510515, Peoples R China
Source
PHYSICS IN MEDICINE AND BIOLOGY | 2023, Vol. 68, No. 09
Funding
National Natural Science Foundation of China;
Keywords
mild cognitive impairment; multi-modal neuroimages; convolutional neural network; multi-level fusion; DISEASE; MRI; DEMENTIA; CLASSIFICATION; REPRESENTATION; PROGRESSION; PREDICTION; CONVERSION; DIAGNOSIS;
DOI
10.1088/1361-6560/accac8
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline Code
0831;
Abstract
Objective. Mild cognitive impairment (MCI) is a precursor to Alzheimer's disease (AD), an irreversible, progressive neurodegenerative disease, so its early diagnosis and intervention are of great significance. Recently, many deep learning methods have demonstrated the advantages of multi-modal neuroimages in the MCI identification task. However, previous studies often simply concatenate patch-level features for prediction without modeling the dependencies among local features. In addition, many methods focus only on modality-sharable information or on modality-specific features and ignore their integration. This work aims to address these issues and construct a model for accurate MCI identification. Approach. In this paper, we propose a multi-level fusion network for MCI identification using multi-modal neuroimages, which consists of a local representation learning stage and a dependency-aware global representation learning stage. Specifically, for each patient, we first extract multiple pairs of patches from the same positions in the multi-modal neuroimages. Then, in the local representation learning stage, multiple dual-channel sub-networks, each consisting of two modality-specific feature extraction branches and three sine-cosine fusion modules, are constructed to learn local features that preserve modality-sharable and modality-specific representations simultaneously. In the dependency-aware global representation learning stage, we further capture long-range dependencies among local representations and integrate them into global ones for MCI identification. Main results. Experiments on the ADNI-1/ADNI-2 datasets demonstrate the superior performance of the proposed method in MCI identification tasks (accuracy: 0.802, sensitivity: 0.821, specificity: 0.767 in the MCI diagnosis task; accuracy: 0.849, sensitivity: 0.841, specificity: 0.856 in the MCI conversion task) when compared with state-of-the-art methods.
The proposed classification model has demonstrated promising potential to predict MCI conversion and identify disease-related regions in the brain. Significance. We propose a multi-level fusion network for MCI identification using multi-modal neuroimages. The results on the ADNI datasets demonstrate its feasibility and superiority.
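The two-stage pipeline described in the abstract can be sketched in miniature. This is a hedged illustration only: the paper does not specify the sine-cosine fusion module's exact form or the network weights, so the blend parameter `theta`, the feature dimensions, and the self-attention step below are all hypothetical stand-ins for the learned components, not the authors' implementation.

```python
import math
import random

random.seed(0)
n_patches, d = 4, 8  # patch pairs per subject, local feature dimension


def randvec(n):
    """Stand-in for features produced by a modality-specific branch."""
    return [random.gauss(0.0, 1.0) for _ in range(n)]


# Patch-level features from the two modality-specific branches
# (e.g. MRI and PET patches taken at the same brain positions).
mri = [randvec(d) for _ in range(n_patches)]
pet = [randvec(d) for _ in range(n_patches)]

# Stage 1: local representation learning. One plausible reading of a
# sine-cosine fusion: blend the two modalities with trigonometric
# weights (theta would be learned in the real network).
theta = 0.3
local = [
    [math.sin(theta) * m + math.cos(theta) * p for m, p in zip(mp, pp)]
    for mp, pp in zip(mri, pet)
]

# Stage 2: dependency-aware global representation learning, sketched as
# scaled dot-product self-attention over the local patch features.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

scores = [
    [dot(local[i], local[j]) / math.sqrt(d) for j in range(n_patches)]
    for i in range(n_patches)
]
attn = []
for row in scores:
    m = max(row)
    exps = [math.exp(s - m) for s in row]
    z = sum(exps)
    attn.append([e / z for e in exps])  # row-wise softmax

# Each patch aggregates information from all others (long-range
# dependencies), then patch contexts are pooled into one global vector.
context = [
    [sum(attn[i][j] * local[j][k] for j in range(n_patches)) for k in range(d)]
    for i in range(n_patches)
]
global_repr = [sum(c[k] for c in context) / n_patches for k in range(d)]
```

In the actual model, `global_repr` would feed a classifier head for the MCI diagnosis or conversion prediction task; here it simply demonstrates how local fused features can be integrated into a single subject-level representation.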
Pages: 15