A Novel Local Ablation Approach for Explaining Multimodal Classifiers

Cited by: 1
Authors
Ellis, Charles A. [1 ,2 ]
Zhang, Rongen [3 ]
Calhoun, Vince D. [1 ,2 ]
Carbajal, Darwin A. [4 ]
Sendi, Mohammad S. E. [1 ,2 ]
Wang, May D. [2 ,4 ]
Miller, Robyn L. [1 ,2 ]
Affiliations
[1] Georgia State Univ, Georgia Inst Technol, Triinst Ctr Translat Res Neuroimaging & Data Sci, Atlanta, GA 30303 USA
[2] Emory Univ, Atlanta, GA 30322 USA
[3] Georgia State Univ, Dept Comp Informat Syst, Atlanta, GA USA
[4] Georgia Inst Technol, Wallace H Coulter Dept Biomed Engn, Atlanta, GA USA
Keywords
Multimodal Classification; Sleep Scoring; Local Explainability; Deep Learning
DOI
10.1109/BIBE52308.2021.9635541
Chinese Library Classification
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
With the growing use of multimodal data for deep learning classification in healthcare research, more studies are presenting explainability methods for insight into multimodal classifiers. Among these studies, few utilize local explainability methods, which can provide (1) insight into the classification of samples over time and (2) a better understanding of the effects of demographic and clinical variables upon patterns learned by classifiers. To the best of our knowledge, we present the first local explainability approach for insight into the importance of each modality to the classification of samples over time. Our approach uses ablation, and we demonstrate how it can show the importance of each modality to the correct classification of each class. We further present a novel analysis that explores the effects of demographic and clinical variables upon the multimodal patterns learned by the classifier. As a use-case, we train a convolutional neural network for automated sleep staging with electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) data. We find that EEG is the most important modality across most stages, though EOG is particularly important for non-rapid eye movement stage 1. Further, we identify significant relationships between the local explanations and subject age, sex, and medication status, which suggest that the classifier learned features associated with these variables across multiple modalities and correctly classified samples. Our novel explainability approach has implications for many fields involving multimodal classification. Moreover, our examination of the degree to which demographic and clinical variables may affect classifiers could provide direction for future studies in automated biomarker discovery.
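The modality-wise local ablation idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the toy linear "classifier", the per-modality feature sizes, and the zero-replacement baseline are all assumptions standing in for the paper's trained CNN and real EEG/EOG/EMG inputs.

```python
import numpy as np

# Hedged sketch of modality-wise local ablation (not the paper's implementation).
# A fixed linear-softmax model stands in for the trained CNN; data are synthetic.

rng = np.random.default_rng(0)

MODALITIES = ["EEG", "EOG", "EMG"]   # modalities from the paper's use-case
N_FEATS = 4                          # features per modality (hypothetical)
N_CLASSES = 5                        # five sleep stages

# Random but fixed weights play the role of the trained classifier.
W = rng.normal(size=(len(MODALITIES) * N_FEATS, N_CLASSES))

def predict_proba(x):
    """Softmax over a linear score; placeholder for the trained network."""
    z = x @ W
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def local_ablation_importance(x):
    """For one sample x, ablate (zero out) each modality in turn and report
    the drop in the probability of the originally predicted class."""
    base = predict_proba(x)
    pred = int(np.argmax(base))
    importance = {}
    for i, mod in enumerate(MODALITIES):
        x_abl = x.copy()
        x_abl[i * N_FEATS:(i + 1) * N_FEATS] = 0.0  # ablate one modality
        importance[mod] = float(base[pred] - predict_proba(x_abl)[pred])
    return pred, importance

sample = rng.normal(size=len(MODALITIES) * N_FEATS)
pred, imp = local_ablation_importance(sample)
print(pred, {m: round(v, 3) for m, v in imp.items()})
```

Repeating this per sample across a recording yields a time series of per-modality importance, which could then be compared across classes or related to demographic variables, in the spirit of the analysis the abstract describes.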
Pages: 6