Multi-modal deep learning from imaging genomic data for schizophrenia classification

Cited by: 2
Authors
Kanyal, Ayush [1 ]
Mazumder, Badhan [1 ]
Calhoun, Vince D. [2 ]
Preda, Adrian [3 ]
Turner, Jessica [4 ]
Ford, Judith [5 ]
Ye, Dong Hye [2 ]
Affiliations
[1] Georgia State Univ, Dept Comp Sci, Atlanta, GA USA
[2] Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Atlanta, GA 30303 USA
[3] Univ Calif Irvine, Dept Psychiat & Human Behav, Irvine, CA USA
[4] Ohio State Univ, Dept Psychiat & Behav Hlth, Columbus, OH USA
[5] Univ Calif San Francisco, Dept Psychiat, San Francisco, CA USA
Source
FRONTIERS IN PSYCHIATRY | 2024, Vol. 15
Funding
US National Institutes of Health; US National Science Foundation;
Keywords
schizophrenia; multi-modal; imaging genetics; deep learning; explainable artificial intelligence (XAI); single nucleotide polymorphism (SNP); functional network connectivity (FNC); structural magnetic resonance imaging (sMRI); NETWORK; ABNORMALITIES; CONNECTIVITY; ASSOCIATION; BIPOLAR; MODELS; SNP;
DOI
10.3389/fpsyt.2024.1384842
Chinese Library Classification
R749 [Psychiatry];
Discipline code
100205;
Abstract
Background: Schizophrenia (SZ) is a psychiatric condition that adversely affects an individual's cognition, emotion, and behavior. Although extensively studied, the etiology of SZ remains unclear, as multiple factors contribute to its development. A consistent body of evidence documents structural and functional deviations in the brains of individuals with SZ, and the hereditary aspect of SZ is supported by the significant involvement of genomic markers. This motivates investigating SZ from a multi-modal perspective and developing approaches for improved detection.

Methods: Our proposed method employs a deep learning framework combining features from structural magnetic resonance imaging (sMRI), functional magnetic resonance imaging (fMRI), and genetic markers such as single nucleotide polymorphisms (SNPs). For sMRI, we used a pre-trained DenseNet to extract morphological features. To identify the functional connections in fMRI and the SNPs most relevant to SZ, we applied a 1-dimensional convolutional neural network (CNN) followed by layer-wise relevance propagation (LRP). Finally, we concatenated the features obtained across modalities and fed them to an extreme gradient boosting (XGBoost) tree-based classifier to distinguish SZ from healthy controls (HC).

Results: Experimental evaluation on a clinical dataset demonstrated that, compared to the outcomes obtained from each modality individually, our proposed multi-modal approach classified SZ individuals from HC with an improved accuracy of 79.01%.

Conclusion: We proposed a deep learning-based framework that efficiently selects multi-modal (sMRI, fMRI, and genetic) features and fuses them to obtain improved classification scores. Additionally, using explainable AI (XAI), we pinpointed and validated the functional network connections and SNPs that contributed most to SZ classification, providing the necessary interpretation behind our findings.
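The late-fusion step the abstract describes (concatenating per-modality features, then feeding them to a boosted-tree classifier) can be sketched as below. This is a minimal illustration, not the authors' implementation: the feature dimensions, the synthetic data, and the use of scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost are all assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost

rng = np.random.default_rng(0)
n = 100  # hypothetical number of subjects

# Hypothetical per-modality feature vectors (dimensions are illustrative):
smri_feats = rng.normal(size=(n, 64))  # DenseNet morphological features (sMRI)
fnc_feats = rng.normal(size=(n, 32))   # LRP-selected functional connections (fMRI)
snp_feats = rng.normal(size=(n, 16))   # LRP-selected SNP features
labels = rng.integers(0, 2, size=n)    # synthetic labels: 1 = SZ, 0 = HC

# Late fusion: concatenate modality features along the feature axis,
# then train a gradient-boosted tree classifier on the fused vector.
fused = np.concatenate([smri_feats, fnc_feats, snp_feats], axis=1)
clf = GradientBoostingClassifier(random_state=0).fit(fused, labels)
print(fused.shape)  # (100, 112)
```

In a real pipeline the random arrays would be replaced by the features each modality-specific network produces, and accuracy would be measured on held-out subjects rather than the training set.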
Pages: 10
Related papers
50 records in total
  • [1] MULTI-MODAL DEEP LEARNING ON IMAGING GENETICS FOR SCHIZOPHRENIA CLASSIFICATION
    Kanyal, Ayush
    Kandula, Srinivas
    Calhoun, Vince
    Ye, Dong Hye
    2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW, 2023,
  • [2] Multi-modal deep learning of functional and structural neuroimaging and genomic data to predict mental illness
    Rahaman, Md Abdur
    Chen, Jiayu
    Fu, Zening
    Lewis, Noah
    Iraji, Armin
    Calhoun, Vince D.
    2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC), 2021, : 3267 - 3272
  • [3] RADIOGRAPHIC OSTEOARTHRITIS PROGRESSION PREDICTION VIA MULTI-MODAL IMAGING DATA AND DEEP LEARNING
    Panfilov, E.
    Tiulpin, A.
    Nieminen, M. T.
    Saarakkala, S.
    OSTEOARTHRITIS AND CARTILAGE, 2022, 30 : S86 - S87
  • [4] Deep Learning Based Multi-modal Registration for Retinal Imaging
    Arikan, Mustafa
    Sadeghipour, Amir
    Gerendas, Bianca
    Told, Reinhard
    Schmidt-Erfurth, Ursula
    INTERPRETABILITY OF MACHINE INTELLIGENCE IN MEDICAL IMAGE COMPUTING AND MULTIMODAL LEARNING FOR CLINICAL DECISION SUPPORT, 2020, 11797 : 75 - 82
  • [5] Multi-modal Active Learning From Human Data: A Deep Reinforcement Learning Approach
    Rudovic, Ognjen
    Zhang, Meiru
    Schuller, Bjorn
    Picard, Rosalind W.
    ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 6 - 15
  • [6] Prediction of crime occurrence from multi-modal data using deep learning
    Kang, Hyeon-Woo
    Kang, Hang-Bong
    PLOS ONE, 2017, 12 (04):
  • [7] Detecting glaucoma from multi-modal data using probabilistic deep learning
    Huang, Xiaoqin
    Sun, Jian
    Gupta, Krati
    Montesano, Giovanni
    Crabb, David P.
    Garway-Heath, David F.
    Brusini, Paolo
    Lanzetta, Paolo
    Oddone, Francesco
    Turpin, Andrew
    McKendrick, Allison M.
    Johnson, Chris A.
    Yousefi, Siamak
    FRONTIERS IN MEDICINE, 2022, 9
  • [8] PHENOTYPING OF SCHIZOPHRENIA BY MULTI-MODAL BRAIN IMAGING
    Schall, Ulrich
    Rasser, Paul E.
    Fulham, Ross
    Todd, Juanita
    Michie, Patricia T.
    Ward, Philip B.
    Johnston, Patrick
    Thompson, Paul M.
    SCHIZOPHRENIA RESEARCH, 2010, 117 (2-3) : 480 - 481
  • [9] PHENOTYPING OF SCHIZOPHRENIA BY MULTI-MODAL BRAIN IMAGING
    Schall, Ulrich
    Rasser, Paul E.
    Fulham, Ross
    Todd, Juanita
    Johnston, Patrick J.
    Ward, Philip B.
    Thompson, Paul M.
    Michie, Patricia T.
    AUSTRALIAN AND NEW ZEALAND JOURNAL OF PSYCHIATRY, 2010, 44 : A37 - A37
  • [10] Multi-modal fusion deep learning model for excavated soil heterogeneous data with efficient classification
    Guo, Qi-Meng
    Zhan, Liang-Tong
    Yin, Zhen-Yu
    Feng, Hang
    Yang, Guang-Qian
    Chen, Yun-Min
    COMPUTERS AND GEOTECHNICS, 2024, 175