ALIME: Autoencoder Based Approach for Local Interpretability

Cited by: 46
Authors
Shankaranarayana, Sharath M. [1 ]
Runje, Davor [1 ]
Affiliations
[1] ZASTI AI, Chennai, Tamil Nadu, India
Keywords
Interpretable machine learning; Deep learning; Autoencoder; Explainable AI (XAI); Healthcare;
DOI
10.1007/978-3-030-33607-3_49
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning, and especially deep learning, has garnered tremendous popularity in recent years due to its improved performance over other methods, and the availability of large amounts of data has aided deep learning's progress. Nevertheless, deep learning models are opaque and often seen as black boxes. There is thus an inherent need to make these models interpretable, especially in the medical domain. In this work, we propose a locally interpretable method inspired by local interpretable model-agnostic explanations (LIME), a recent tool that has gained considerable interest. LIME generates a single-instance-level explanation by artificially generating a dataset around the instance (through random sampling and perturbation) and then training a local linear interpretable model. One major issue in LIME is the instability of the generated explanations, which is caused by the randomly generated dataset. Another issue in this kind of local interpretable model is local fidelity. We propose novel modifications to LIME by employing an autoencoder, which serves as a better weighting function for the local model. We perform extensive comparisons across different datasets and show that our proposed method improves both stability and local fidelity.
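The abstract's core idea (weighting LIME's perturbed samples by distance in an autoencoder's latent space rather than the input space) can be sketched as follows. Note that `black_box`, `encode`, and all parameters here are hypothetical stand-ins chosen for illustration; ALIME trains an actual autoencoder on the training data, so this is a minimal sketch of latent-space weighting, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model to be explained: a simple nonlinear function.
def black_box(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

# Hypothetical stand-in for a trained autoencoder's encoder: any mapping to
# a latent space would do here; ALIME learns this mapping from data.
def encode(X):
    return np.tanh(X @ np.array([[1.0, 0.2], [0.2, 1.0]]))

def explain_instance(x, n_samples=500, sigma=1.0):
    # 1. Perturb around the instance, as in LIME.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    y = black_box(Z)
    # 2. Weight samples by distance in the encoder's latent space
    #    (the autoencoder-based modification) instead of the input space.
    d = np.linalg.norm(encode(Z) - encode(x[None, :]), axis=1)
    w = np.exp(-(d ** 2) / sigma ** 2)
    # 3. Fit a weighted linear surrogate; its coefficients serve as
    #    the local feature attributions.
    A = np.hstack([Z, np.ones((n_samples, 1))])
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W[:, 0], rcond=None)
    return coef[:-1]  # drop the intercept

attributions = explain_instance(np.array([0.3, -1.0]))
```

For the instance (0.3, -1.0), the surrogate's coefficients approximate the black box's local gradient: positive in the first feature (sin is increasing near 0.3) and negative in the second (the quadratic term slopes downward at -1.0).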
Pages: 454-463
Page count: 10
Related Papers
50 records in total
  • [31] AN INTERPRETABILITY APPROACH FOR MORTALITY RISK PREDICTION BASED ON W-BDA AND MLP
    Zhang, Guanghua
    Zhang, Huimin
    Fang, Mingxing
    Zhang, Qi
    Ding, Renshuang
    [J]. UNIVERSITY POLITEHNICA OF BUCHAREST SCIENTIFIC BULLETIN SERIES C-ELECTRICAL ENGINEERING AND COMPUTER SCIENCE, 2023, 85 (01): : 245 - 260
  • [33] LOCAL REPRESENTATION LEARNING WITH A CONVOLUTIONAL AUTOENCODER
    Kenning, Michael P.
    Xie, Xianghua
    Edwards, Michael
    Deng, Jingjing
    [J]. 2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2018, : 3239 - 3243
  • [34] Multi-local Collaborative AutoEncoder
    Chu, Jielei
    Wang, Hongjun
    Liu, Jing
    Gong, Zhiguo
    Li, Tianrui
    [J]. KNOWLEDGE-BASED SYSTEMS, 2022, 239
  • [35] Looking for a real-world-semantics-based approach to the interpretability of fuzzy systems
    Cat Ho Nguyen
    Alonso, Jose M.
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ-IEEE), 2017,
  • [37] Variational autoencoder based bipartite network embedding by integrating local and global structure
    Jiao, Pengfei
    Tang, Minghu
    Liu, Hongtao
    Wang, Yaping
    Lu, Chunyu
    Wu, Huaming
    [J]. INFORMATION SCIENCES, 2020, 519 (519) : 9 - 21
  • [38] Stable local interpretable model-agnostic explanations based on a variational autoencoder
    Xiang, Xu
    Yu, Hong
    Wang, Ye
    Wang, Guoyin
    [J]. APPLIED INTELLIGENCE, 2023, 53 (23) : 28226 - 28240
  • [39] A fuzzy clustering algorithm enhancing local model interpretability
    Diez, J. L.
    Navarro, J. L.
    Sala, A.
    [J]. SOFT COMPUTING, 2007, 11 (10) : 973 - 983
  • [40] Saxformer: A Time Series Forecasting Framework with Local Interpretability
    Song, Ying
    Li, Danjing
    Sun, Kai
    Zheng, Yin
    [J]. PROCEEDINGS OF 2024 INTERNATIONAL CONFERENCE ON POWER ELECTRONICS AND ARTIFICIAL INTELLIGENCE, PEAI 2024, 2024, : 601 - 605