MULTI-MODAL DEEP LEARNING ON IMAGING GENETICS FOR SCHIZOPHRENIA CLASSIFICATION

Cited by: 2
Authors
Kanyal, Ayush [1 ]
Kandula, Srinivas [1 ]
Calhoun, Vince [1 ,2 ]
Ye, Dong Hye [1 ,2 ]
Affiliations
[1] Georgia State University, Department of Computer Science, Atlanta, GA 30303, USA
[2] Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Atlanta, GA, USA
Keywords
Schizophrenia; Imaging Genomics; Multimodal; Deep Learning; Explainable AI; SNP
DOI
10.1109/ICASSPW59220.2023.10193352
Chinese Library Classification
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Schizophrenia (SZ) is a severe, chronic mental condition that impacts one's capacity to think, act, and interact with others. It has been established that SZ patients exhibit morphological changes in the brain, including decreased hippocampal and thalamic volume. It is also known that patients with SZ have irregular functional brain connectivity. Furthermore, because SZ is a genetic illness, genetic markers such as single nucleotide polymorphisms (SNPs) can be useful for characterizing SZ patients. We propose an automatic method to detect changes in SZ patients' brains that accounts for the heterogeneous, multi-modal nature of the disorder. We present a novel deep-learning method to classify SZ subjects using morphological features from structural MRI (sMRI), brain connectivity features from functional MRI (fMRI), and genetic features from SNPs. For sMRI, we use a pre-trained DenseNet to extract convolutional features that encode the morphological changes induced by SZ. For fMRI, we select the important connections in the functional network connectivity (FNC) matrix by applying layer-wise relevance propagation (LRP). We also detect SZ-linked SNPs using LRP on a pre-trained 1-dimensional convolutional neural network. The combined features from these three modalities are then fed to an extreme gradient boosting (XGBoost) tree classifier for SZ diagnosis. Experiments on a clinical dataset show that our multi-modal approach significantly improves SZ classification accuracy compared with uni-modal deep learning methods.
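The abstract describes a late-fusion pipeline: modality-specific deep networks extract features (DenseNet for sMRI, LRP-based selection over the FNC matrix for fMRI, LRP over a 1-D CNN for SNPs), and the concatenated features are classified with XGBoost. The sketch below is a minimal, self-contained illustration of that fusion pattern, not the authors' implementation: the data are synthetic, the feature dimensions (1378 FNC entries, 20,000 SNPs, 112x112 sMRI slices) and all function names are assumptions for illustration, and gradient-times-input is used as a simplified stand-in for full layer-wise relevance propagation.

```python
# Minimal late-fusion sketch of the pipeline outlined in the abstract.
# Assumptions (not from the paper): synthetic data, toy feature dimensions,
# a 2D slice-based DenseNet extractor, and |gradient * input| as a simplified
# proxy for full LRP relevance scores.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import densenet121
from xgboost import XGBClassifier

# --- sMRI branch: convolutional features from a pre-trained DenseNet ---------
densenet = densenet121(weights=None)   # weights omitted here to stay offline
densenet.classifier = nn.Identity()    # keep the 1024-d pooled conv features
densenet.eval()

def smri_features(slice_2d: np.ndarray) -> np.ndarray:
    """Extract DenseNet features from a single 2D sMRI slice (illustrative)."""
    x = torch.from_numpy(slice_2d).float()[None, None]  # (1, 1, H, W)
    x = x.repeat(1, 3, 1, 1)                            # replicate to 3 channels
    with torch.no_grad():
        return densenet(x).squeeze(0).numpy()           # (1024,)

# --- fMRI / SNP branches: relevance-based feature selection ------------------
# The paper applies LRP to networks over the FNC matrix and over SNPs; here we
# score features by |gradient * input| through a small network and keep top-k.
def topk_relevant(model: nn.Module, x: np.ndarray, k: int) -> np.ndarray:
    inp = torch.from_numpy(x).float()[None].requires_grad_(True)
    model(inp).sum().backward()
    relevance = (inp.grad * inp).abs().detach().squeeze(0).numpy()
    idx = np.argsort(relevance)[-k:]                    # indices of top-k features
    return x[idx]

fnc_net = nn.Sequential(nn.Linear(1378, 64), nn.ReLU(), nn.Linear(64, 1))   # toy FNC net
snp_net = nn.Sequential(nn.Linear(20000, 64), nn.ReLU(), nn.Linear(64, 1))  # toy SNP net

# --- Late fusion with an XGBoost tree classifier ------------------------------
rng = np.random.default_rng(0)
n = 40                                                  # synthetic subjects
X = np.stack([
    np.concatenate([
        smri_features(rng.standard_normal((112, 112))),
        topk_relevant(fnc_net, rng.standard_normal(1378).astype(np.float32), 100),
        topk_relevant(snp_net, rng.standard_normal(20000).astype(np.float32), 100),
    ])
    for _ in range(n)
])
y = rng.integers(0, 2, size=n)                          # SZ vs. control labels

clf = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In this kind of late-fusion design, each modality can be pre-trained independently and only the compact, relevance-selected feature vectors are concatenated, which keeps the final XGBoost input small relative to the raw imaging and genetic data.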
Pages: 5