Multi-modal deep learning from imaging genomic data for schizophrenia classification

Cited by: 2
Authors
Kanyal, Ayush [1 ]
Mazumder, Badhan [1 ]
Calhoun, Vince D. [2 ]
Preda, Adrian [3 ]
Turner, Jessica [4 ]
Ford, Judith [5 ]
Ye, Dong Hye [2 ]
Affiliations
[1] Georgia State Univ, Dept Comp Sci, Atlanta, GA USA
[2] Triinst Ctr Translat Res Neuroimaging & Data Sci T, Atlanta, GA 30303 USA
[3] Univ Calif Irvine, Dept Psychiat & Human Behav, Irvine, CA USA
[4] Ohio State Univ, Dept Psychiat & Behav Hlth, Columbus, OH USA
[5] Univ Calif San Francisco, Dept Psychiat, San Francisco, CA USA
Source
FRONTIERS IN PSYCHIATRY | 2024, Vol. 15
Funding
U.S. National Institutes of Health; U.S. National Science Foundation
Keywords
schizophrenia; multi-modal; imaging genetics; deep learning; explainable artificial intelligence (XAI); single nucleotide polymorphism (SNP); functional network connectivity (FNC); structural magnetic resonance imaging (sMRI); NETWORK; ABNORMALITIES; CONNECTIVITY; ASSOCIATION; BIPOLAR; MODELS; SNP;
DOI
10.3389/fpsyt.2024.1384842
Chinese Library Classification (CLC)
R749 [Psychiatry]
Discipline code
100205
Abstract
Background: Schizophrenia (SZ) is a psychiatric condition that adversely affects an individual's cognitive, emotional, and behavioral functioning. Although extensively studied, the etiology of SZ remains unclear, as multiple factors contribute to its development. A consistent body of evidence documents structural and functional deviations in the brains of individuals with SZ, and the hereditary aspect of SZ is supported by the significant involvement of genomic markers. This motivates investigating SZ from a multi-modal perspective and developing approaches for improved detection.
Methods: Our proposed method employed a deep learning framework combining features from structural magnetic resonance imaging (sMRI), functional magnetic resonance imaging (fMRI), and genetic markers such as single nucleotide polymorphisms (SNPs). For sMRI, we used a pre-trained DenseNet to extract morphological features. To identify the functional connections in fMRI and the SNPs most relevant to SZ, we applied a 1-dimensional convolutional neural network (CNN) followed by layer-wise relevance propagation (LRP). Finally, we concatenated the features obtained across modalities and fed them to an extreme gradient boosting (XGBoost) tree-based classifier to distinguish SZ from healthy controls (HC).
Results: Experimental evaluation on a clinical dataset demonstrated that, compared to the outcomes obtained from each modality individually, our multi-modal approach classified SZ individuals from HC with an improved accuracy of 79.01%.
Conclusion: We proposed a deep learning-based framework that efficiently selects multi-modal (sMRI, fMRI, and genetic) features and fuses them to obtain improved classification scores. Additionally, by using explainable AI (XAI), we were able to pinpoint and validate the functional network connections and SNPs that contributed most toward SZ classification, providing the necessary interpretation behind our findings.
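The abstract's feature-selection step relies on layer-wise relevance propagation (LRP) to score which inputs (FNC values or SNPs) drive the CNN's decision. The following is a minimal NumPy sketch of the LRP epsilon-rule backward pass on a toy one-hidden-layer ReLU network; the dimensions, random weights, and two-layer architecture are illustrative assumptions, not the paper's actual 1D-CNN model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 100 input features (e.g., FNC values) -> 16 hidden -> 1 logit
x = rng.standard_normal(100)
W1 = rng.standard_normal((16, 100)) * 0.1
W2 = rng.standard_normal((1, 16)) * 0.1

# Forward pass (biases omitted for simplicity)
a1 = np.maximum(W1 @ x, 0.0)   # ReLU hidden activations
out = W2 @ a1                  # scalar classification logit

# LRP epsilon-rule: redistribute the output score layer by layer,
# proportionally to each unit's contribution z = w * a.
eps = 1e-9
z2 = W2 * a1                                       # (1, 16) hidden contributions
r1 = (z2 * (out / (z2.sum(axis=1, keepdims=True) + eps))).sum(axis=0)

z1 = W1 * x                                        # (16, 100) input contributions
r0 = (z1 * (r1[:, None] / (z1.sum(axis=1, keepdims=True) + eps))).sum(axis=0)

# Conservation property: input relevances sum (approximately) to the logit.
# Ranking |r0| would then select the most relevant features per modality.
print(float(out), float(r0.sum()))
```

In the paper's pipeline, the top-relevance features identified this way for fMRI and SNP data would be concatenated with the sMRI DenseNet features before the XGBoost classification stage.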
Pages: 10