Deep Multi-modal Latent Representation Learning for Automated Dementia Diagnosis

Cited by: 19
Authors
Zhou, Tao [1 ]
Liu, Mingxia [2 ,3 ]
Fu, Huazhu [1 ]
Wang, Jun [4 ]
Shen, Jianbing [1 ]
Shao, Ling [1 ]
Shen, Dinggang [2 ,3 ]
Affiliations
[1] Incept Inst Artificial Intelligence, Abu Dhabi, U Arab Emirates
[2] Univ N Carolina, Dept Radiol, Chapel Hill, NC 27515 USA
[3] Univ N Carolina, BRIC, Chapel Hill, NC 27515 USA
[4] Shanghai Univ, Shanghai Inst Adv Commun & Data Sci, Sch Commun & Informat Engn, Shanghai, Peoples R China
Keywords
MILD COGNITIVE IMPAIRMENT;
DOI
10.1007/978-3-030-32251-9_69
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Effective fusion of multi-modality neuroimaging data, such as structural magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (PET), has attracted increasing interest in computer-aided brain disease diagnosis, by providing complementary structural and functional information of the brain to improve diagnostic performance. Although considerable progress has been made, there remain several significant challenges in traditional methods for fusing multi-modality data. First, the fusion of multi-modality data is usually independent of the training of diagnostic models, leading to suboptimal performance. Second, it is challenging to effectively exploit the complementary information among multiple modalities based on low-level imaging features (e.g., image intensity or tissue volume). To this end, in this paper, we propose a novel Deep Latent Multi-modality Dementia Diagnosis (DLMD2) framework based on a deep non-negative matrix factorization (NMF) model. Specifically, we integrate the feature fusion/learning process into the classifier construction step for eliminating the gap between neuroimaging features and disease labels. To exploit the correlations among multi-modality data, we learn latent representations for multi-modality data by sharing the common high-level representations in the last layer of each modality in the deep NMF model. Extensive experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset validate that our proposed method outperforms several state-of-the-art methods.
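The core idea in the abstract — factorizing each modality through its own stack of bases while forcing all modalities to share the final-layer latent representation — can be sketched as below. This is a minimal illustration, not the paper's DLMD2 implementation: the data shapes and values are synthetic, the optimizer is plain projected gradient descent rather than the paper's NMF update rules, and the joint classifier-training component of DLMD2 is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for MRI and PET feature matrices (features x subjects);
# all shapes and values here are illustrative assumptions.
n_subjects = 40
X = [rng.uniform(0.0, 1.0, size=(60, n_subjects)),   # "MRI" features
     rng.uniform(0.0, 1.0, size=(30, n_subjects))]   # "PET" features

layers = [20, 10]          # per-modality layer widths; last = shared latent dim
lr, n_iter = 5e-4, 500

# Per-modality basis stacks W[m] = [W1, W2]; a single H shared by all modalities,
# so X[m] is approximated by W[m][0] @ W[m][1] @ H.
W = [[rng.uniform(0.0, 0.1, size=(X[m].shape[0], layers[0])),
      rng.uniform(0.0, 0.1, size=(layers[0], layers[1]))] for m in range(2)]
H = rng.uniform(0.0, 0.1, size=(layers[1], n_subjects))

def loss():
    """Summed squared reconstruction error over both modalities."""
    return sum(np.linalg.norm(W[m][0] @ W[m][1] @ H - X[m]) ** 2
               for m in range(2))

loss_before = loss()
for _ in range(n_iter):
    grad_H = np.zeros_like(H)
    for m in range(2):
        R = W[m][0] @ W[m][1] @ H - X[m]            # reconstruction residual
        # Projected-gradient steps keep every factor non-negative.
        W[m][0] = np.maximum(W[m][0] - lr * (R @ H.T @ W[m][1].T), 0.0)
        W[m][1] = np.maximum(W[m][1] - lr * (W[m][0].T @ R @ H.T), 0.0)
        grad_H += W[m][1].T @ W[m][0].T @ R         # accumulate across modalities
    H = np.maximum(H - lr * grad_H, 0.0)            # shared update couples modalities
loss_after = loss()
print(f"loss: {loss_before:.1f} -> {loss_after:.1f}")
```

Because H receives the summed gradient from every modality, it is driven toward a latent code that reconstructs both inputs at once, which is the coupling mechanism the abstract describes for exploiting cross-modality correlations.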
Pages: 629-638
Page count: 10
Related Papers (50 records)
  • [41] Multi-modal Representation Learning for Social Post Location Inference
    Dai, RuiTing
    Luo, Jiayi
    Luo, Xucheng
    Mo, Lisi
    Ma, Wanlun
    Zhou, Fan
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 6331 - 6336
  • [42] Tripartite interaction representation learning for multi-modal sentiment analysis
    Wang, Binqiang
    Dong, Gang
    Zhao, Yaqian
    Li, Rengang
    Yin, Wenfeng
    Lu, Lihua
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 268
  • [43] MULTI-MODAL FUSION LEARNING FOR CERVICAL DYSPLASIA DIAGNOSIS
    Chen, Tingting
    Ma, Xinjun
    Ying, Xingde
    Wang, Wenzhe
    Yuan, Chunnv
    Lu, Weiguo
    Chen, Danny Z.
    Wu, Jian
    2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019), 2019, : 1505 - 1509
  • [44] A Theoretical Analysis of Multi-Modal Representation Learning with Regular Functions
    Vural, Elif
    2020 28TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2020
  • [45] Multi-Modal Transportation Recommendation with Unified Route Representation Learning
    Liu, Hao
    Han, Jindong
    Fu, Yanjie
    Zhou, Jingbo
    Lu, Xinjiang
    Xiong, Hui
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2020, 14 (03): : 342 - 350
  • [46] Graph Embedding Contrastive Multi-Modal Representation Learning for Clustering
    Xia, Wei
    Wang, Tianxiu
    Gao, Quanxue
    Yang, Ming
    Gao, Xinbo
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 1170 - 1183
  • [47] Relation-Induced Multi-Modal Shared Representation Learning for Alzheimer's Disease Diagnosis
    Ning, Zhenyuan
    Xiao, Qing
    Feng, Qianjin
    Chen, Wufan
    Zhang, Yu
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2021, 40 (06) : 1632 - 1645
  • [48] MULTI-MODAL REPRESENTATION LEARNING FOR SHORT VIDEO UNDERSTANDING AND RECOMMENDATION
    Guo, Daya
    Hong, Jiangshui
    Luo, Binli
    Yan, Qirui
    Niu, Zhangming
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), 2019, : 687 - 690
  • [49] Towards a systematic multi-modal representation learning for network data
    Ben Houidi, Zied
    Azorin, Raphael
    Gallo, Massimo
    Finamore, Alessandro
    Rossi, Dario
    THE 21ST ACM WORKSHOP ON HOT TOPICS IN NETWORKS, HOTNETS 2022, 2022, : 181 - 187
  • [50] Multi-modal Representation Learning for Video Advertisement Content Structuring
    Guo, Daya
    Zeng, Zhaoyang
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 4770 - 4774