Multi-modal deep learning from imaging genomic data for schizophrenia classification

Cited: 2
Authors
Kanyal, Ayush [1 ]
Mazumder, Badhan [1 ]
Calhoun, Vince D. [2 ]
Preda, Adrian [3 ]
Turner, Jessica [4 ]
Ford, Judith [5 ]
Ye, Dong Hye [2 ]
Affiliations
[1] Georgia State Univ, Dept Comp Sci, Atlanta, GA USA
[2] Tri-Institutional Ctr Translat Res Neuroimaging & Data Sci (TReNDS), Atlanta, GA 30303 USA
[3] Univ Calif Irvine, Dept Psychiat & Human Behav, Irvine, CA USA
[4] Ohio State Univ, Dept Psychiat & Behav Hlth, Columbus, OH USA
[5] Univ Calif San Francisco, Dept Psychiat, San Francisco, CA USA
Source
FRONTIERS IN PSYCHIATRY | 2024, Vol. 15
Funding
US National Institutes of Health; US National Science Foundation;
Keywords
schizophrenia; multi-modal; imaging genetics; deep learning; explainable artificial intelligence (XAI); single nucleotide polymorphism (SNP); functional network connectivity (FNC); structural magnetic resonance imaging (sMRI); NETWORK; ABNORMALITIES; CONNECTIVITY; ASSOCIATION; BIPOLAR; MODELS; SNP;
DOI
10.3389/fpsyt.2024.1384842
Chinese Library Classification
R749 [Psychiatry];
Discipline Code
100205;
Abstract
Background: Schizophrenia (SZ) is a psychiatric condition that adversely affects an individual's cognition, emotion, and behavior. Although extensively studied, the etiology of SZ remains unclear, as multiple factors contribute to its development. A consistent body of evidence documents structural and functional deviations in the brains of individuals with SZ, and the hereditary aspect of the disorder is supported by the significant involvement of genomic markers. This motivates investigating SZ from a multi-modal perspective and developing approaches for improved detection.
Methods: Our proposed method employed a deep learning framework combining features from structural magnetic resonance imaging (sMRI), functional magnetic resonance imaging (fMRI), and genetic markers such as single nucleotide polymorphisms (SNPs). For sMRI, we used a pre-trained DenseNet to extract morphological features. To identify the functional connections in fMRI and the SNPs most relevant to SZ, we applied a 1-dimensional convolutional neural network (CNN) followed by layer-wise relevance propagation (LRP). Finally, we concatenated the features obtained across modalities and fed them to an extreme gradient boosting (XGBoost) tree-based classifier to separate individuals with SZ from healthy controls (HC).
Results: Experimental evaluation on a clinical dataset demonstrated that, compared to each modality used individually, our multi-modal approach classified SZ individuals from HC with an improved accuracy of 79.01%.
Conclusion: We proposed a deep learning-based framework that efficiently selects multi-modal (sMRI, fMRI, and genetic) features and fuses them to obtain improved classification scores. Additionally, using explainable AI (XAI), we pinpointed and validated the functional network connections and SNPs that contributed most to SZ classification, providing the interpretation behind our findings.
Pages: 10
Related Papers
50 records in total
  • [11] Exploring Fusion Strategies in Deep Learning Models for Multi-Modal Classification
    Zhang, Duoyi
    Nayak, Richi
    Bashar, Md Abul
    DATA MINING, AUSDM 2021, 2021, 1504 : 102 - 117
  • [12] Deep Learning based Multi-modal Ultrasound-Photoacoustic Imaging
    Halder, Sumana
    Patidar, Sankalp
    Chaudhury, Koel
    Mandal, Subhamoy
    PROCEEDINGS OF THE 2024 IEEE SOUTH ASIAN ULTRASONICS SYMPOSIUM, SAUS 2024, 2024,
  • [13] A Unified Deep Learning Framework for Multi-Modal Multi-Dimensional Data
    Xi, Pengcheng
    Goubran, Rafik
    Shu, Chang
    2019 IEEE INTERNATIONAL SYMPOSIUM ON MEDICAL MEASUREMENTS AND APPLICATIONS (MEMEA), 2019,
  • [14] Learning Concept Taxonomies from Multi-modal Data
    Zhang, Hao
    Hu, Zhiting
    Deng, Yuntian
Sachan, Mrinmaya
    Yan, Zhicheng
    Xing, Eric P.
    PROCEEDINGS OF THE 54TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1, 2016, : 1791 - 1801
  • [15] Twitter Demographic Classification Using Deep Multi-modal Multi-task Learning
    Vijayaraghavan, Prashanth
    Vosoughi, Soroush
    Roy, Deb
    PROCEEDINGS OF THE 55TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2017), VOL 2, 2017, : 478 - 483
  • [16] Multi-Modal Low-Data-Based Learning for Video Classification
    Citak, Erol
    Karsligil, Mine Elif
    APPLIED SCIENCES-BASEL, 2024, 14 (10):
  • [17] Multi-modal Learning with Missing Data for Cancer Diagnosis Using Histopathological and Genomic Data
    Cui, Can
    Asad, Zuhayr
    Dean, William F.
    Smith, Isabelle T.
    Madden, Christopher
    Bao, Shunxing
    Landman, Bennett A.
    Roland, Joseph T.
    Coburn, Lori A.
    Wilson, Keith T.
    Zwerner, Jeffrey P.
    Zhao, Shilin
    Wheless, Lee E.
    Huo, Yuankai
    MEDICAL IMAGING 2022: COMPUTER-AIDED DIAGNOSIS, 2022, 12033
  • [19] A Massive Multi-Modal Perception Data Classification Method Using Deep Learning Based on Internet of Things
    Jiang, Linli
    Wu, Chunmei
    INTERNATIONAL JOURNAL OF WIRELESS INFORMATION NETWORKS, 2020, 27 (02) : 226 - 233
  • [20] Multi-modal data clustering using deep learning: A systematic review
    Raya, Sura
    Orabi, Mariam
    Afyouni, Imad
    Al Aghbari, Zaher
    NEUROCOMPUTING, 2024, 607