A Novel Local Ablation Approach for Explaining Multimodal Classifiers

Cited by: 1
Authors
Ellis, Charles A. [1 ,2 ]
Zhang, Rongen [3 ]
Calhoun, Vince D. [1 ,2 ]
Carbajal, Darwin A. [4 ]
Sendi, Mohammad S. E. [1 ,2 ]
Wang, May D. [2 ,4 ]
Miller, Robyn L. [1 ,2 ]
Affiliations
[1] Georgia State Univ, Georgia Inst Technol, Triinst Ctr Translat Res Neuroimaging & Data Sci, Atlanta, GA 30303 USA
[2] Emory Univ, Atlanta, GA 30322 USA
[3] Georgia State Univ, Dept Comp Informat Syst, Atlanta, GA USA
[4] Georgia Inst Technol, Wallace H Coulter Dept Biomed Engn, Atlanta, GA USA
Keywords
Multimodal Classification; Sleep Scoring; Local Explainability; Deep Learning
DOI
10.1109/BIBE52308.2021.9635541
Chinese Library Classification
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
With the growing use of multimodal data for deep learning classification in healthcare research, more studies are presenting explainability methods for insight into multimodal classifiers. Among these studies, few utilize local explainability methods, which can provide (1) insight into the classification of samples over time and (2) a better understanding of the effects of demographic and clinical variables upon the patterns learned by classifiers. To the best of our knowledge, we present the first local explainability approach for insight into the importance of each modality to the classification of samples over time. Our approach uses ablation, and we demonstrate how it can show the importance of each modality to the correct classification of each class. We further present a novel analysis that explores the effects of demographic and clinical variables upon the multimodal patterns learned by the classifier. As a use case, we train a convolutional neural network for automated sleep staging with electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) data. We find that EEG is the most important modality across most stages, though EOG is particularly important for non-rapid eye movement stage 1. Further, we identify significant relationships between the local explanations and subject age, sex, and medication status, which suggests that the classifier learned features associated with these variables across multiple modalities and correctly classified samples. Our novel explainability approach has implications for many fields involving multimodal classification. Moreover, our examination of the degree to which demographic and clinical variables may affect classifiers could provide direction for future studies in automated biomarker discovery.
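The local ablation idea described in the abstract can be sketched in a few lines: zero out one modality at a time and record, per sample, how much the probability of the originally predicted class drops. This is a minimal illustrative sketch under assumed interfaces (a `predict_proba` function and arrays with one modality per channel axis), not the authors' exact implementation; the function name and zero-fill ablation choice are hypothetical.

```python
import numpy as np

def local_modality_importance(predict_proba, X, modality_axis=1):
    """Per-sample modality importance via ablation (illustrative sketch).

    predict_proba : callable mapping X -> (n_samples, n_classes) probabilities
    X             : array, e.g. (n_samples, n_modalities, n_timepoints)
    Returns an (n_samples, n_modalities) array where entry [i, m] is the drop
    in the originally predicted class's probability for sample i when
    modality m is replaced with zeros.
    """
    base = predict_proba(X)                       # unablated probabilities
    pred = base.argmax(axis=1)                    # predicted class per sample
    rows = np.arange(X.shape[0])
    n_modalities = X.shape[modality_axis]
    importance = np.zeros((X.shape[0], n_modalities))
    for m in range(n_modalities):
        X_abl = X.copy()
        idx = [slice(None)] * X.ndim
        idx[modality_axis] = m
        X_abl[tuple(idx)] = 0.0                   # ablate one modality
        abl = predict_proba(X_abl)
        importance[:, m] = base[rows, pred] - abl[rows, pred]
    return importance
```

Because the drop is computed per sample, the resulting matrix can be inspected over time (e.g. across consecutive sleep epochs) or grouped by demographic variables, as in the analyses the abstract describes.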
Pages: 6