Modeling Alzheimer's Disease Progression from Multi-task and Self-supervised Learning Perspective with Brain Networks

Cited by: 6
Authors
Liang, Wei [1 ,2 ]
Zhang, Kai [1 ,2 ]
Cao, Peng [1 ,2 ,3 ]
Zhao, Pengfei [4 ]
Liu, Xiaoli [5 ]
Yang, Jinzhu [1 ,2 ,3 ]
Zaiane, Osmar R. [6 ]
Affiliations
[1] Northeastern Univ, Comp Sci & Engn, Shenyang, Peoples R China
[2] Northeastern Univ, Key Lab Intelligent Comp Med Image, Minist Educ, Shenyang, Peoples R China
[3] Natl Frontiers Sci Ctr Ind Intelligence & Syst Op, Shenyang 110819, Peoples R China
[4] Nanjing Med Univ, Affiliated Brain Hosp, Nanjing, Peoples R China
[5] DAMO Acad, Alibaba Grp, Hangzhou, Peoples R China
[6] Univ Alberta, Alberta Machine Intelligence Inst, Edmonton, AB, Canada
Funding
National Natural Science Foundation of China;
关键词
Self-supervised learning; Multi-task learning; Cognitive scores; Brain networks; Longitudinal prediction; DIAGNOSIS;
DOI
10.1007/978-3-031-43907-0_30
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Alzheimer's disease (AD) is a common, irreversible neurodegenerative disease among the elderly. Establishing relationships between brain networks and cognitive scores plays a vital role in identifying the progression of AD. However, most previous works focus on a single time point, without modeling disease progression from longitudinal brain network data. Besides, the available longitudinal data are often insufficient to adequately train predictive models. To address these issues, we propose a Self-supervised Multi-Task learning Progression model, SMP-Net, for modeling the relationship between longitudinal brain networks and cognitive scores. Specifically, the proposed model is trained in a self-supervised way by designing a masked graph auto-encoder and a temporal contrastive learning scheme that simultaneously learn structural and evolutional features from the longitudinal brain networks. Furthermore, we propose a temporal multi-task learning paradigm to model the relationships among multiple cognitive score prediction tasks. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate the effectiveness of our method, which achieves consistent improvements over state-of-the-art methods in terms of Mean Absolute Error (MAE), Pearson Correlation Coefficient (PCC) and Concordance Correlation Coefficient (CCC). Our code is available at https://github.com/IntelliDAL/Graph/tree/main/SMP-Net.
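For readers who want a concrete picture of the pipeline the abstract describes, the following is a minimal, illustrative PyTorch sketch of its three ingredients: a masked graph auto-encoder over brain connectivity matrices, a temporal contrastive objective between consecutive visits, and multi-task regression heads for cognitive scores. This is an assumption-laden sketch, not the authors' implementation (see the GitHub link above for the released code); the class names, layer sizes, the dense GCN layer, and the masking and contrastive details below are all illustrative choices.

# Illustrative sketch only (not SMP-Net itself): masked graph auto-encoding,
# temporal contrastive learning, and multi-task cognitive-score heads.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCNLayer(nn.Module):
    """One graph convolution over a dense adjacency matrix (assumed encoder block)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (B, N, in_dim) node features; adj: (B, N, N) brain connectivity.
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin((adj @ x) / deg))

class SMPNetSketch(nn.Module):
    def __init__(self, n_rois=90, hid=64, n_scores=3, mask_ratio=0.3):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.enc1 = DenseGCNLayer(n_rois, hid)           # ROI connectivity profile as input feature
        self.enc2 = DenseGCNLayer(hid, hid)
        self.edge_decoder = nn.Linear(hid, hid)          # reconstructs adjacency from node embeddings
        self.proj = nn.Linear(hid, hid)                  # projection head for contrastive learning
        self.heads = nn.ModuleList([nn.Linear(hid, 1) for _ in range(n_scores)])

    def encode(self, adj):
        return self.enc2(self.enc1(adj, adj), adj)       # (B, N, hid)

    def graph_embedding(self, h):
        return h.mean(dim=1)                             # (B, hid) mean readout

    def masked_reconstruction_loss(self, adj):
        # Randomly mask edges, encode the corrupted graph, reconstruct the adjacency.
        mask = (torch.rand_like(adj) > self.mask_ratio).float()
        h = self.encode(adj * mask)
        z = self.edge_decoder(h)
        adj_hat = torch.sigmoid(z @ z.transpose(1, 2))
        return F.binary_cross_entropy(adj_hat, (adj > 0).float())

    def temporal_contrastive_loss(self, adj_t, adj_t1, tau=0.1):
        # Same subject at consecutive visits = positive pair; other subjects in
        # the batch = negatives (InfoNCE).
        z_t = F.normalize(self.proj(self.graph_embedding(self.encode(adj_t))), dim=-1)
        z_t1 = F.normalize(self.proj(self.graph_embedding(self.encode(adj_t1))), dim=-1)
        logits = z_t @ z_t1.t() / tau                    # (B, B) similarity matrix
        targets = torch.arange(z_t.size(0), device=z_t.device)
        return F.cross_entropy(logits, targets)

    def predict_scores(self, adj):
        g = self.graph_embedding(self.encode(adj))
        return torch.cat([head(g) for head in self.heads], dim=-1)  # (B, n_scores)

if __name__ == "__main__":
    B, N = 4, 90                                         # toy batch of symmetric brain networks
    adj_t = torch.rand(B, N, N); adj_t = (adj_t + adj_t.transpose(1, 2)) / 2
    adj_t1 = torch.rand(B, N, N); adj_t1 = (adj_t1 + adj_t1.transpose(1, 2)) / 2
    model = SMPNetSketch()
    ssl_loss = model.masked_reconstruction_loss(adj_t) + model.temporal_contrastive_loss(adj_t, adj_t1)
    scores = model.predict_scores(adj_t1)
    print(ssl_loss.item(), scores.shape)

In such a sketch, the two self-supervised losses would be used to pre-train the encoder on longitudinal scans without labels, after which the per-score heads are trained on visits with available cognitive scores; how SMP-Net actually combines and weights these objectives is specified in the paper, not here.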
Pages: 310 - 319
Number of pages: 10
Related Papers
50 records in total
  • [21] MULTI-TASK SELF-SUPERVISED VISUAL REPRESENTATION LEARNING FOR MONOCULAR ROAD SEGMENTATION
    Cho, Jaehoon
    Kim, Youngjung
    Jung, Hyungjoo
    Oh, Changjae
    Youn, Jaesung
    Sohn, Kwanghoon
    2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2018,
  • [22] scPretrain: multi-task self-supervised learning for cell-type classification
    Zhang, Ruiyi
    Luo, Yunan
    Ma, Jianzhu
    Zhang, Ming
    Wang, Sheng
    BIOINFORMATICS, 2022, 38 (06) : 1607 - 1614
  • [23] Multi-task self-supervised learning based fusion representation for Multi-view clustering
    Guo, Tianlong
    Shen, Derong
    Kou, Yue
    Nie, Tiezheng
    INFORMATION SCIENCES, 2025, 694
  • [24] Multi-task Self-supervised Few-Shot Detection
    Zhang, Guangyong
    Duan, Lijuan
    Wang, Wenjian
    Gong, Zhi
    Ma, Bian
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII, 2024, 14436 : 107 - 119
  • [25] Multi-Task Collaborative Network: Bridge the Supervised and Self-Supervised Learning for EEG Classification in RSVP Tasks
    Li, Hongxin
    Tang, Jingsheng
    Li, Wenqi
    Dai, Wei
    Liu, Yaru
    Zhou, Zongtan
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2024, 32 : 638 - 651
  • [26] TacoPrompt: A Collaborative Multi-Task Prompt Learning Method for Self-Supervised Taxonomy Completion
    Xu, Hongyuan
    Liu, Ciyi
    Niu, Yuhang
    Chen, Yunong
    Cai, Xiangrui
    Wen, Yanlong
    Yuan, Xiaojie
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 15804 - 15817
  • [27] MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations
    Heggan, Calum
    Hospedales, Tim
    Budgett, Sam
    Yaghoobi, Mehrdad
    INTERSPEECH 2023, 2023, : 4399 - 4403
  • [28] Modeling Disease Progression in Retinal OCTs with Longitudinal Self-supervised Learning
    Rivail, Antoine
    Schmidt-Erfurth, Ursula
    Vogl, Wolf-Dieter
    Waldstein, Sebastian M.
    Riedl, Sophie
    Grechenig, Christoph
    Wu, Zhichao
    Bogunovic, Hrvoje
    PREDICTIVE INTELLIGENCE IN MEDICINE (PRIME 2019), 2019, 11843 : 44 - 52
  • [29] Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis
    Yu, Wenmeng
    Xu, Hua
    Yuan, Ziqi
    Wu, Jiele
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 10790 - 10797
  • [30] Self-Supervised Multi-Task Pretraining Improves Image Aesthetic Assessment
    Pfister, Jan
    Kobs, Konstantin
    Hotho, Andreas
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 816 - 825