Multi-level fusion network for mild cognitive impairment identification using multi-modal neuroimages

Cited by: 2
Authors
Xu, Haozhe [1 ,2 ,3 ]
Zhong, Shengzhou [1 ,2 ,3 ]
Zhang, Yu [1 ,2 ,3 ]
Affiliations
[1] Southern Med Univ, Sch Biomed Engn, Guangzhou 510515, Peoples R China
[2] Southern Med Univ, Guangdong Prov Key Lab Med Image Proc, Guangzhou 510515, Peoples R China
[3] Southern Med Univ, Guangdong Prov Engn Lab Med Imaging & Diagnost Tec, Guangzhou 510515, Peoples R China
Source
PHYSICS IN MEDICINE AND BIOLOGY | 2023, Vol. 68, Issue 09
Funding
National Natural Science Foundation of China
Keywords
mild cognitive impairment; multi-modal neuroimages; convolutional neural network; multi-level fusion; DISEASE; MRI; DEMENTIA; CLASSIFICATION; REPRESENTATION; PROGRESSION; PREDICTION; CONVERSION; DIAGNOSIS;
DOI
10.1088/1361-6560/accac8
CLC number
R318 [Biomedical Engineering]
Discipline code
0831
Abstract
Objective. Mild cognitive impairment (MCI) is a precursor to Alzheimer's disease (AD), an irreversible, progressive neurodegenerative disease, so its early diagnosis and intervention are of great significance. Recently, many deep learning methods have demonstrated the advantages of multi-modal neuroimages in MCI identification tasks. However, previous studies often simply concatenate patch-level features for prediction without modeling the dependencies among local features. In addition, many methods focus only on modality-sharable information or on modality-specific features and ignore their integration. This work aims to address these issues and construct a model for accurate MCI identification. Approach. In this paper, we propose a multi-level fusion network for MCI identification using multi-modal neuroimages, which consists of a local representation learning stage and a dependency-aware global representation learning stage. Specifically, for each patient, we first extract multiple pairs of patches from the same positions in the multi-modal neuroimages. In the local representation learning stage, multiple dual-channel sub-networks, each consisting of two modality-specific feature extraction branches and three sine-cosine fusion modules, are constructed to learn local features that preserve modality-sharable and modality-specific representations simultaneously. In the dependency-aware global representation learning stage, we further capture long-range dependencies among the local representations and integrate them into global ones for MCI identification. Main results. Experiments on the ADNI-1/ADNI-2 datasets demonstrate the superior performance of the proposed method in MCI identification tasks (accuracy 0.802, sensitivity 0.821, specificity 0.767 in the MCI diagnosis task; accuracy 0.849, sensitivity 0.841, specificity 0.856 in the MCI conversion task) compared with state-of-the-art methods. The proposed classification model has demonstrated promising potential to predict MCI conversion and to identify disease-related regions in the brain. Significance. We propose a multi-level fusion network for MCI identification using multi-modal neuroimages. The results on the ADNI datasets demonstrate its feasibility and superiority.
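As a rough illustration of the two-stage design described in the abstract, the PyTorch sketch below pairs modality-specific 3D-CNN branches with a sine-cosine fusion module for each patch location, then applies self-attention across the patch-level features to form a dependency-aware global representation. The layer sizes, the exact form of the sine/cosine gating, the patch count, and the use of a transformer encoder layer for the global stage are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a multi-level fusion network for two neuroimaging modalities.
# All architectural details here are assumptions; they only mirror the structure
# described in the abstract (local dual-channel sub-networks + sine-cosine fusion,
# followed by dependency-aware global representation learning).
import math
import torch
import torch.nn as nn


class SineCosineFusion(nn.Module):
    """Hypothetical fusion block: mixes two modality features with learnable
    sine/cosine gates so that shared and modality-specific parts are both kept."""

    def __init__(self, channels: int):
        super().__init__()
        # Initialize at pi/4 so shared and specific terms start equally weighted.
        self.gate = nn.Parameter(torch.full((channels,), math.pi / 4))

    def forward(self, x_mri: torch.Tensor, x_pet: torch.Tensor) -> torch.Tensor:
        shared = torch.sin(self.gate) ** 2 * (x_mri + x_pet)    # modality-sharable part
        specific = torch.cos(self.gate) ** 2 * (x_mri - x_pet)  # modality-specific part
        return shared + specific


class DualChannelSubNet(nn.Module):
    """Local representation learning for one pair of patches
    (an MRI patch and a PET patch taken from the same brain location)."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()

        def branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                nn.Linear(16, feat_dim), nn.ReLU(),
            )

        self.mri_branch = branch()
        self.pet_branch = branch()
        self.fusion = SineCosineFusion(feat_dim)

    def forward(self, mri_patch, pet_patch):
        return self.fusion(self.mri_branch(mri_patch), self.pet_branch(pet_patch))


class MultiLevelFusionNet(nn.Module):
    """Stacks local sub-networks, then models long-range dependencies among
    patch-level features with self-attention before classification."""

    def __init__(self, num_patches: int = 8, feat_dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.subnets = nn.ModuleList([DualChannelSubNet(feat_dim) for _ in range(num_patches)])
        self.attn = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, mri_patches, pet_patches):
        # mri_patches, pet_patches: (batch, num_patches, 1, D, H, W)
        local = torch.stack(
            [net(mri_patches[:, i], pet_patches[:, i]) for i, net in enumerate(self.subnets)],
            dim=1,
        )  # (batch, num_patches, feat_dim)
        global_feat = self.attn(local).mean(dim=1)  # dependency-aware global representation
        return self.classifier(global_feat)


if __name__ == "__main__":
    model = MultiLevelFusionNet()
    mri = torch.randn(2, 8, 1, 16, 16, 16)  # toy-sized patches for a quick shape check
    pet = torch.randn(2, 8, 1, 16, 16, 16)
    print(model(mri, pet).shape)  # torch.Size([2, 2])
```

The key design point the sketch tries to capture is that fusion happens twice: once per patch (so modality-sharable and modality-specific information are combined locally) and once across patches (so the classifier sees dependencies among brain regions rather than an unordered concatenation of local features).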
Pages: 15
Related Papers (50 records in total)
  • [1] A Multi-Modal and Multi-Atlas Integrated Framework for Identification of Mild Cognitive Impairment
    Long, Zhuqing
    Li, Jie
    Liao, Haitao
    Deng, Li
    Du, Yukeng
    Fan, Jianghua
    Li, Xiaofeng
    Miao, Jichang
    Qiu, Shuang
    Long, Chaojie
    Jing, Bin
    [J]. BRAIN SCIENCES, 2022, 12 (06)
  • [2] Multi-Modal fusion with multi-level attention for Visual Dialog
    Zhang, Jingping
    Wang, Qiang
    Han, Yahong
    [J]. INFORMATION PROCESSING & MANAGEMENT, 2020, 57 (04)
  • [3] MLMFNet: A multi-level modality fusion network for multi-modal accelerated MRI reconstruction
    Zhou, Xiuyun
    Zhang, Zhenxi
    Du, Hongwei
    Qiu, Bensheng
    [J]. MAGNETIC RESONANCE IMAGING, 2024, 111 : 246 - 255
  • [4] Assessing clinical progression from subjective cognitive decline to mild cognitive impairment with incomplete multi-modal neuroimages
    Liu, Yunbi
    Yue, Ling
    Xiao, Shifu
    Yang, Wei
    Shen, Dinggang
    Liu, Mingxia
    [J]. MEDICAL IMAGE ANALYSIS, 2022, 75
  • [5] Identification of early mild cognitive impairment using multi-modal data and graph convolutional networks
    Liu, Jin
    Tan, Guanxin
    Lan, Wei
    Wang, Jianxin
    [J]. BMC BIOINFORMATICS, 2020, 21 (Suppl 6)
  • [6] Multi-level Interaction Network for Multi-Modal Rumor Detection
    Zou, Ting
    Qian, Zhong
    Li, Peifeng
    [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [7] Multi-level and Multi-modal Target Detection Based on Feature Fusion
    Cheng T.
    Sun L.
    Hou D.
    Shi Q.
    Zhang J.
    Chen J.
    Huang H.
    [J]. Qiche Gongcheng/Automotive Engineering, 2021, 43 (11): : 1602 - 1610
  • [8] MBIAN: Multi-level bilateral interactive attention network for multi-modal
    Sun, Kai
    Zhang, Jiangshe
    Wang, Jialin
    Xu, Shuang
    Zhang, Chunxia
    Hu, Junying
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2023, 231
  • [9] Multi-level Fusion of Multi-modal Semantic Embeddings for Zero Shot Learning
    Kong, Zhe
    Wang, Xin
    Gao, Neng
    Zhang, Yifei
    Liu, Yuhan
    Tu, Chenyang
    [J]. PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, ICMI 2022, 2022, : 310 - 318