Multi-level fusion network for mild cognitive impairment identification using multi-modal neuroimages

Cited by: 2
|
Authors
Xu, Haozhe [1 ,2 ,3 ]
Zhong, Shengzhou [1 ,2 ,3 ]
Zhang, Yu [1 ,2 ,3 ]
Affiliations
[1] Southern Med Univ, Sch Biomed Engn, Guangzhou 510515, Peoples R China
[2] Southern Med Univ, Guangdong Prov Key Lab Med Image Proc, Guangzhou 510515, Peoples R China
[3] Southern Med Univ, Guangdong Prov Engn Lab Med Imaging & Diagnost Tec, Guangzhou 510515, Peoples R China
Source
PHYSICS IN MEDICINE AND BIOLOGY | 2023 / Vol. 68 / No. 09
Funding
National Natural Science Foundation of China;
Keywords
mild cognitive impairment; multi-modal neuroimages; convolutional neural network; multi-level fusion; DISEASE; MRI; DEMENTIA; CLASSIFICATION; REPRESENTATION; PROGRESSION; PREDICTION; CONVERSION; DIAGNOSIS;
DOI
10.1088/1361-6560/accac8
Chinese Library Classification (CLC)
R318 [Biomedical Engineering];
Discipline Classification Code
0831;
Abstract
Objective. Mild cognitive impairment (MCI) is a precursor to Alzheimer's disease (AD), an irreversible and progressive neurodegenerative disease, so its early diagnosis and intervention are of great significance. Recently, many deep learning methods have demonstrated the advantages of multi-modal neuroimages in the MCI identification task. However, previous studies often simply concatenate patch-level features for prediction without modeling the dependencies among local features. In addition, many methods focus only on modality-sharable information or modality-specific features and ignore their integration. This work aims to address these issues and construct a model for accurate MCI identification. Approach. In this paper, we propose a multi-level fusion network for MCI identification using multi-modal neuroimages, which consists of a local representation learning stage and a dependency-aware global representation learning stage. Specifically, for each patient, we first extract multiple pairs of patches from the same positions in the multi-modal neuroimages. In the local representation learning stage, multiple dual-channel sub-networks, each consisting of two modality-specific feature extraction branches and three sine-cosine fusion modules, are constructed to learn local features that preserve modality-sharable and modality-specific representations simultaneously. In the dependency-aware global representation learning stage, we further capture long-range dependencies among the local representations and integrate them into global ones for MCI identification. Main results. Experiments on the ADNI-1/ADNI-2 datasets demonstrate the superior performance of the proposed method in MCI identification tasks (accuracy: 0.802, sensitivity: 0.821, specificity: 0.767 in the MCI diagnosis task; accuracy: 0.849, sensitivity: 0.841, specificity: 0.856 in the MCI conversion task) compared with state-of-the-art methods. The proposed classification model has demonstrated promising potential to predict MCI conversion and identify disease-related regions in the brain. Significance. We propose a multi-level fusion network for MCI identification using multi-modal neuroimages. The results on the ADNI datasets demonstrate its feasibility and superiority.
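The abstract does not specify the exact form of the sine-cosine fusion module. As a loose illustration only, one plausible reading (an assumption for sketching purposes, not the authors' formulation) is to blend two modality-specific patch features with complementary squared sine/cosine gates, so that the fused vector is a convex combination that retains information from both modalities:

```python
import numpy as np

def sine_cosine_fusion(x_mri, x_pet, alpha=0.5):
    """Hypothetical sine-cosine fusion of two modality feature vectors.

    `alpha` in [0, 1] steers the blend: alpha=1 keeps only the first
    modality, alpha=0 only the second. Because sin^2 + cos^2 = 1, the
    gates always form a convex combination. This is an illustrative
    sketch, NOT the module defined in the paper.
    """
    g = np.sin(alpha * np.pi / 2) ** 2   # gate for modality 1 (e.g. MRI)
    return g * x_mri + (1.0 - g) * x_pet  # (1 - g) == cos^2 gate

# Example: fusing two 4-dimensional patch features with equal weight.
mri = np.array([1.0, 2.0, 3.0, 4.0])
pet = np.array([4.0, 3.0, 2.0, 1.0])
fused = sine_cosine_fusion(mri, pet, alpha=0.5)  # element-wise average
```

In the paper's architecture, three such fusion modules sit inside each dual-channel sub-network; how the gates are parameterized or learned there is not stated in the abstract.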
Pages: 15