Model-Agnostic Explanations using Minimal Forcing Subsets

Cited by: 0
Authors:
Han, Xing [1 ]
Ghosh, Joydeep [1 ]
Affiliations:
[1] Univ Texas Austin, Dept Elect & Comp Engn, Austin, TX 78712 USA
DOI:
10.1109/IJCNN52387.2021.9533992
CLC Classification: TP18 [Theory of Artificial Intelligence]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
How can we find a subset of training samples that are most responsible for a specific prediction made by a complex black-box machine learning model? More generally, how can we explain the model's decisions to end-users in a transparent way? We propose a new model-agnostic algorithm to identify a minimal set of training samples that are indispensable for a given model's decision at a particular test point, i.e., the model's decision would have changed upon the removal of this subset from the training dataset. Our algorithm identifies such a set of "indispensable" samples iteratively by solving a constrained optimization problem. Further, we speed up the algorithm through efficient approximations and provide theoretical justification for its performance. To demonstrate the applicability and effectiveness of our approach, we apply it to a variety of tasks including data poisoning detection, training set debugging and understanding loan decisions. The results show that our algorithm is an effective and easy-to-comprehend tool that helps to better understand local model behavior, and therefore facilitates the adoption of machine learning in domains where such understanding is a requisite.
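The idea of a forcing subset can be illustrated with a toy sketch. Note this is only a hedged illustration: the dataset, function names, and the brute-force greedy leave-one-out loop below are our own construction, not the paper's constrained-optimization algorithm or its efficient approximations. It simply retrains a logistic regression while removing the training point that most undermines the original prediction at a test point, until that prediction flips; the removed points then form a forcing subset (small, but not guaranteed minimal).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (20, 2)),   # class 0 cluster
               rng.normal(1.0, 1.0, (20, 2))])   # class 1 cluster
y = np.array([0] * 20 + [1] * 20)
x_test = np.array([[0.1, 0.1]])  # a test point near the decision boundary

def fit_predict(rows):
    """Fit on the given training rows; return (label, class probs) at x_test."""
    model = LogisticRegression().fit(X[rows], y[rows])
    return model.predict(x_test)[0], model.predict_proba(x_test)[0]

idx = list(range(len(y)))
orig_pred, _ = fit_predict(idx)
removed = []
final_pred = orig_pred
for _ in range(15):  # cap the greedy search
    # candidates: remaining points of the originally predicted class,
    # i.e. the samples that "support" the prediction we try to flip
    cands = [i for i in idx if y[i] == orig_pred]
    # remove the candidate whose deletion most lowers the probability
    # of the original class at x_test (brute-force leave-one-out)
    scores = [fit_predict([j for j in idx if j != i])[1][orig_pred]
              for i in cands]
    worst = cands[int(np.argmin(scores))]
    removed.append(worst)
    idx.remove(worst)
    final_pred, _ = fit_predict(idx)
    if final_pred != orig_pred:
        break  # prediction flipped: `removed` is a forcing subset

print(f"forcing subset size: {len(removed)}")
```

The brute-force retraining above is exactly what the paper avoids: its iterative constrained-optimization formulation and efficient approximations replace the per-candidate refits that make this naive version quadratic in the training-set size.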
Pages: 8