Multi-modal deep learning from imaging genomic data for schizophrenia classification

Cited by: 2
Authors
Kanyal, Ayush [1 ]
Mazumder, Badhan [1 ]
Calhoun, Vince D. [2 ]
Preda, Adrian [3 ]
Turner, Jessica [4 ]
Ford, Judith [5 ]
Ye, Dong Hye [2 ]
Affiliations
[1] Georgia State Univ, Dept Comp Sci, Atlanta, GA USA
[2] Tri-Inst Ctr Translat Res Neuroimaging & Data Sci (TReNDS), Atlanta, GA 30303 USA
[3] Univ Calif Irvine, Dept Psychiat & Human Behav, Irvine, CA USA
[4] Ohio State Univ, Dept Psychiat & Behav Hlth, Columbus, OH USA
[5] Univ Calif San Francisco, Dept Psychiat, San Francisco, CA USA
Source
FRONTIERS IN PSYCHIATRY | 2024, Vol. 15
Funding
US National Institutes of Health; US National Science Foundation;
Keywords
schizophrenia; multi-modal; imaging genetics; deep learning; explainable artificial intelligence (XAI); single nucleotide polymorphism (SNP); functional network connectivity (FNC); structural magnetic resonance imaging (sMRI); NETWORK; ABNORMALITIES; CONNECTIVITY; ASSOCIATION; BIPOLAR; MODELS; SNP;
DOI
10.3389/fpsyt.2024.1384842
Chinese Library Classification (CLC)
R749 [Psychiatry];
Discipline classification code
100205;
Abstract
Background: Schizophrenia (SZ) is a psychiatric condition that adversely affects an individual's cognition, emotion, and behavior. Although extensively studied, the etiology of SZ remains unclear, as multiple factors contribute to its development. A consistent body of evidence documents structural and functional deviations in the brains of individuals with SZ, and the hereditary aspect of SZ is supported by the significant involvement of genomic markers. This motivates investigating SZ from a multi-modal perspective and developing approaches for improved detection.
Methods: Our proposed method employed a deep learning framework combining features from structural magnetic resonance imaging (sMRI), functional magnetic resonance imaging (fMRI), and genetic markers in the form of single nucleotide polymorphisms (SNPs). For sMRI, we used a pre-trained DenseNet to extract morphological features. To identify the functional connections in fMRI and the SNPs most relevant to SZ, we applied a 1-dimensional convolutional neural network (CNN) followed by layer-wise relevance propagation (LRP). Finally, we concatenated the features obtained across modalities and fed them to an extreme gradient boosting (XGBoost) tree-based classifier to distinguish SZ from healthy controls (HC).
Results: Experimental evaluation on a clinical dataset demonstrated that, compared to the outcomes obtained from each modality individually, our proposed multi-modal approach classified SZ individuals from HC with an improved accuracy of 79.01%.
Conclusion: We proposed a deep learning-based framework that efficiently selects multi-modal (sMRI, fMRI, and genetic) features and fuses them to obtain improved classification scores. Additionally, by using explainable AI (XAI), we were able to pinpoint and validate the significant functional network connections and SNPs that contributed most toward SZ classification, providing the necessary interpretation behind our findings.
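As a rough illustration of the late-fusion strategy described in the Methods, the sketch below extracts per-modality embeddings with a small 1-D CNN (standing in for the trained CNN branches, with random data standing in for the pre-trained DenseNet sMRI features) and concatenates them for an XGBoost classifier. All array shapes, layer sizes, and hyperparameters are illustrative assumptions, and the LRP-based feature selection step is omitted; this is not the authors' exact implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

class Branch1DCNN(nn.Module):
    """Tiny 1-D CNN that maps a flattened FNC or SNP vector to an embedding."""
    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),   # length-independent pooling
        )
        self.fc = nn.Linear(16 * 32, emb_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, length)
        h = self.conv(x.unsqueeze(1))                     # (batch, 16, 32)
        return self.fc(h.flatten(1))                      # (batch, emb_dim)

# Toy stand-ins for the three modalities (random data, hypothetical sizes).
n_subjects = 40
labels = np.random.randint(0, 2, n_subjects)                      # 0 = HC, 1 = SZ
smri_feat = np.random.randn(n_subjects, 128).astype(np.float32)   # e.g., DenseNet output
fnc_vec = torch.randn(n_subjects, 1378)                           # flattened FNC matrix
snp_vec = torch.randn(n_subjects, 500)                            # SNP dosage vector

with torch.no_grad():  # feature extraction only; training of the branches is omitted
    fnc_feat = Branch1DCNN()(fnc_vec).numpy()
    snp_feat = Branch1DCNN()(snp_vec).numpy()

# Late fusion: concatenate per-modality features, then classify SZ vs. HC with XGBoost.
fused = np.concatenate([smri_feat, fnc_feat, snp_feat], axis=1)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```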
Pages: 10