"Why Should I Trust You?" Explaining the Predictions of Any Classifier

Cited by: 7892
Authors
Ribeiro, Marco Tulio [1]
Singh, Sameer [1]
Guestrin, Carlos [1]
Affiliations
[1] Univ Washington, Seattle, WA 98105 USA
Keywords
DOI
10.1145/2939672.2939778
Chinese Library Classification:
TP18 [Artificial Intelligence Theory];
Discipline codes:
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.
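The abstract's central mechanism — perturbing an instance, weighting the perturbed samples by proximity, and fitting an interpretable (here, linear) model locally around the prediction — can be sketched in plain Python. This is an illustrative simplification, not the authors' reference implementation: the function name `explain_instance`, the binary-mask perturbation, and the exponential proximity kernel are assumptions chosen for brevity, and the paper's full method additionally uses sparse (LASSO-style) fitting and a submodular pick step not shown here.

```python
import math
import random

def explain_instance(predict_fn, x0, num_samples=1000, kernel_width=0.75, seed=0):
    """LIME-style local surrogate (sketch): perturb x0 by randomly zeroing
    features, weight each perturbed sample by its proximity to x0, and fit
    a weighted linear model whose coefficients serve as the explanation."""
    rng = random.Random(seed)
    d = len(x0)
    X, y, w = [], [], []
    for _ in range(num_samples):
        mask = [rng.randint(0, 1) for _ in range(d)]       # which features survive
        z = [xi * mi for xi, mi in zip(x0, mask)]          # perturbed sample
        dist = math.sqrt(sum(mi == 0 for mi in mask) / d)  # fraction of features removed
        w.append(math.exp(-(dist ** 2) / kernel_width ** 2))  # proximity kernel
        X.append([1.0] + [float(m) for m in mask])         # intercept + binary features
        y.append(predict_fn(z))
    # Weighted least squares via the normal equations: (X^T W X) beta = X^T W y
    n = d + 1
    A = [[sum(wk * X[k][i] * X[k][j] for k, wk in enumerate(w)) for j in range(n)]
         for i in range(n)]
    b = [sum(wk * X[k][i] * y[k] for k, wk in enumerate(w)) for i in range(n)]
    beta = _solve(A, b)
    return beta[1:]  # per-feature local importance (intercept dropped)

def _solve(A, b):
    """Gaussian elimination with partial pivoting for the small normal system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
        # back-substitution happens after full elimination
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

For a black box that is itself linear, such as `lambda z: 2 * z[0] + 0.1 * z[1]`, the surrogate recovers the local importances almost exactly, ranking feature 0 far above the others — the same kind of faithful local attribution the abstract describes for text and image classifiers.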
Pages: 1135 - 1144
Page count: 10
Related Papers (50 results)
  • [1] Documenting Evidence of a Reuse of "'Why Should I Trust You?": Explaining the Predictions of Any Classifier'
    Peng, Kewen
    Menzies, Tim
    PROCEEDINGS OF THE 29TH ACM JOINT MEETING ON EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING (ESEC/FSE '21), 2021, : 1600 - 1600
  • [2] Why Should I Trust This Item? Explaining the Recommendations of any Model
    Lonjarret, Corentin
    Robardet, Celine
    Plantevit, Marc
    Auburtin, Roch
    Atzmueller, Martin
    2020 IEEE 7TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA 2020), 2020, : 526 - 535
  • [3] Trust me! (Or, why should I trust you, Bob?)
    Craven, Bob
    FORESTRY CHRONICLE, 2007, 83 (01): : 140 - 141
  • [4] Why should you trust answers from web?
    McGuinness, DL
    Proceedings of the 8th Joint Conference on Information Sciences, Vols 1-3, 2005, : 7 - 9
  • [5] GRADING: Why You Should Trust Your Judgment
    Guskey, Thomas R.
    Jung, Lee Ann
    EDUCATIONAL LEADERSHIP, 2016, 73 (07) : 50 - 54
  • [6] Why Should I Trust Your Code?
    Delignat-Lavaud A.
    Fournet C.
    Vaswani K.
    Clebsch S.
    Riechert M.
    Costa M.
    Russinovich M.
    Queue, 2023, 21 (04): : 94 - 122
  • [7] Why Should I Trust Your Code?
    Delignat-Lavaud, Antoine
    Fournet, Cedric
    Vaswani, Kapil
    Clebsch, Sylvan
    Riechert, Maik
    Costa, Manuel
    Russinovich, Mark
    COMMUNICATIONS OF THE ACM, 2024, 67 (01) : 68 - 76
  • [8] Explaining Any Time Series Classifier
    Guidotti, Riccardo
    Monreale, Anna
    Spinnato, Francesco
    Pedreschi, Dino
    Giannotti, Fosca
    2020 IEEE SECOND INTERNATIONAL CONFERENCE ON COGNITIVE MACHINE INTELLIGENCE (COGMI 2020), 2020, : 167 - 176
  • [9] Why Do You Want to Know and Why Should I Trust You? Implicit Messaging in Cross-Cultural (Mis)Understandings
    Singer, Marjorie Kagawa
    Periyakoil, Vyjeyanthi
    Elk, Ronit
    JOURNAL OF PAIN AND SYMPTOM MANAGEMENT, 2018, 55 (02) : 583 - 583
  • [10] Why Should I Trust You?: Exploring Interpretability in Machine Learning Approaches for Indirect SHM
    Lan, Yifu
    Li, Zhenkun
    Lin, Weiwei
    e-Journal of Nondestructive Testing, 2024, 29 (07):