Multi-modal deep learning from imaging genomic data for schizophrenia classification

Cited by: 2
Authors
Kanyal, Ayush [1 ]
Mazumder, Badhan [1 ]
Calhoun, Vince D. [2 ]
Preda, Adrian [3 ]
Turner, Jessica [4 ]
Ford, Judith [5 ]
Ye, Dong Hye [2 ]
Affiliations
[1] Georgia State Univ, Dept Comp Sci, Atlanta, GA USA
[2] Triinst Ctr Translat Res Neuroimaging & Data Sci (TReNDS), Atlanta, GA 30303 USA
[3] Univ Calif Irvine, Dept Psychiat & Human Behav, Irvine, CA USA
[4] Ohio State Univ, Dept Psychiat & Behav Hlth, Columbus, OH USA
[5] Univ Calif San Francisco, Dept Psychiat, San Francisco, CA USA
Source
FRONTIERS IN PSYCHIATRY | 2024 / Vol. 15
Funding
US National Institutes of Health; US National Science Foundation;
Keywords
schizophrenia; multi-modal; imaging genetics; deep learning; explainable artificial intelligence (XAI); single nucleotide polymorphism (SNP); functional network connectivity (FNC); structural magnetic resonance imaging (sMRI); NETWORK; ABNORMALITIES; CONNECTIVITY; ASSOCIATION; BIPOLAR; MODELS; SNP;
DOI
10.3389/fpsyt.2024.1384842
Chinese Library Classification
R749 [Psychiatry];
Discipline classification code
100205 ;
Abstract
Background: Schizophrenia (SZ) is a psychiatric condition that adversely affects an individual's cognitive, emotional, and behavioral functioning. Although extensively studied, the etiology of SZ remains unclear, as multiple factors contribute to its development. A consistent body of evidence documents structural and functional deviations in the brains of individuals with SZ, and the hereditary aspect of SZ is supported by the significant involvement of genomic markers. Hence there is a need to investigate SZ from a multi-modal perspective and to develop approaches for improved detection.

Methods: Our proposed method employed a deep learning framework combining features from structural magnetic resonance imaging (sMRI), functional magnetic resonance imaging (fMRI), and genetic markers such as single nucleotide polymorphisms (SNPs). For sMRI, we used a pre-trained DenseNet to extract morphological features. To identify the functional connections in fMRI and the SNPs most relevant to SZ, we applied a 1-dimensional convolutional neural network (CNN) followed by layer-wise relevance propagation (LRP). Finally, we concatenated the obtained features across modalities and fed them to an extreme gradient boosting (XGBoost) tree-based classifier to distinguish SZ from healthy controls (HC).

Results: Experimental evaluation on a clinical dataset demonstrated that, compared to the outcomes obtained from each modality individually, our proposed multi-modal approach classified SZ individuals from HC with an improved accuracy of 79.01%.

Conclusion: We proposed a deep learning-based framework that efficiently selects multi-modal (sMRI, fMRI, and genetic) features and fuses them to obtain improved classification scores. Additionally, by using explainable AI (XAI), we were able to pinpoint and validate the significant functional network connections and SNPs that contributed most toward SZ classification, providing the necessary interpretation behind our findings.
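The fusion step the abstract describes (per-modality feature vectors concatenated into one matrix before classification) can be sketched as follows. This is a minimal illustration with synthetic data; the subject count, feature dimensions, and variable names (`smri`, `fnc`, `snp`) are hypothetical stand-ins, not the authors' implementation, and in the paper the fused matrix would then be passed to an XGBoost classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 100  # hypothetical cohort size

# Hypothetical per-modality feature vectors (dimensions are illustrative only):
smri = rng.normal(size=(n_subjects, 32))                       # DenseNet morphological features (sMRI)
fnc = rng.normal(size=(n_subjects, 16))                        # LRP-selected functional network connections (fMRI)
snp = rng.integers(0, 3, size=(n_subjects, 8)).astype(float)   # minor-allele counts (SNPs)

# Late fusion: concatenate the modality features column-wise into one matrix,
# which a tree-based classifier such as XGBoost can then consume.
fused = np.concatenate([smri, fnc, snp], axis=1)
print(fused.shape)  # (100, 56)
```

Concatenating at the feature level (rather than averaging model outputs) lets the downstream tree ensemble learn interactions across modalities, which is the motivation for this style of fusion.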
Pages: 10
Related papers
50 records in total
  • [21] Deep learning supported breast cancer classification with multi-modal image fusion
    Hamdy, Eman
    Zaghloul, Mohamed Saad
    Badawy, Osama
    2021 22ND INTERNATIONAL ARAB CONFERENCE ON INFORMATION TECHNOLOGY (ACIT), 2021, : 319 - 325
  • [22] Multi-modal Learning for Social Image Classification
    Liu, Chunyang
    Zhang, Xu
    Li, Xiong
    Li, Rui
    Zhang, Xiaoming
    Chao, Wenhan
    2016 12TH INTERNATIONAL CONFERENCE ON NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY (ICNC-FSKD), 2016, : 1174 - 1179
  • [23] COVID-19 Hierarchical Classification Using a Deep Learning Multi-Modal
    Althenayan, Albatoul S.
    Alsalamah, Shada A.
    Aly, Sherin
    Nouh, Thamer
    Mahboub, Bassam
    Salameh, Laila
    Alkubeyyer, Metab
    Mirza, Abdulrahman
    SENSORS, 2024, 24 (08)
  • [24] A Hybrid Deep Learning Approach for Multi-Class Cyberbullying Classification Using Multi-Modal Social Media Data
    Tabassum, Israt
    Nunavath, Vimala
    APPLIED SCIENCES-BASEL, 2024, 14 (24)
  • [25] Multi-modal deep learning for landform recognition
    Du, Lin
    You, Xiong
    Li, Ke
    Meng, Liqiu
    Cheng, Gong
    Xiong, Liyang
    Wang, Guangxia
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2019, 158 : 63 - 75
  • [26] Deep Multi-modal Learning with Cascade Consensus
    Yang, Yang
    Wu, Yi-Feng
    Zhan, De-Chuan
    Jiang, Yuan
    PRICAI 2018: TRENDS IN ARTIFICIAL INTELLIGENCE, PT II, 2018, 11013 : 64 - 72
  • [27] Multi-modal deep distance metric learning
    Roostaiyan, Seyed Mahdi
    Imani, Ehsan
    Baghshah, Mahdieh Soleymani
    INTELLIGENT DATA ANALYSIS, 2017, 21 (06) : 1351 - 1369
  • [28] Multi-modal deep learning methods for classification of chest diseases using different medical imaging and cough sounds
    Malik, Hassaan
    Anees, Tayyaba
    PLOS ONE, 2024, 19 (03)
  • [29] Deep Object Tracking with Multi-modal Data
    Zhang, Xuezhi
    Yuan, Yuan
    Lu, Xiaoqiang
    2016 INTERNATIONAL CONFERENCE ON COMPUTER, INFORMATION AND TELECOMMUNICATION SYSTEMS (CITS), 2016, : 161 - 165
  • [30] MULTI-MODAL DATA FUSION SCHEMES FOR INTEGRATED CLASSIFICATION OF IMAGING AND NON-IMAGING BIOMEDICAL DATA
    Tiwari, Pallavi
    Viswanath, Satish
    Lee, George
    Madabhushi, Anant
    2011 8TH IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING: FROM NANO TO MACRO, 2011, : 165 - 168