Semantic Reasoning from Model-Agnostic Explanations

Cited by: 3
|
Authors
Perdih, Timen Stepisnik [1 ]
Lavrac, Nada [1 ,2 ]
Skrlj, Blaz [3 ]
Affiliations
[1] Jozef Stefan Inst, Ljubljana, Slovenia
[2] Univ Nova Gorica, Nova Gorica, Slovenia
[3] Jozef Stefan Inst, Jozef Stefan Int Postgrad Sch, Ljubljana, Slovenia
Keywords
model explanations; reasoning; generalization; SHAP; machine learning; explainable AI
DOI
10.1109/SAMI50585.2021.9378668
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
With the wide adoption of black-box models, instance-based post hoc explanation tools such as LIME and SHAP have become increasingly popular. These tools produce explanations that pinpoint the contributions of key features associated with a given prediction. However, the obtained explanations remain at the raw feature level and are not necessarily understandable by a human expert without extensive domain knowledge. We propose ReEx (Reasoning with Explanations), a method applicable to explanations generated by arbitrary instance-level explainers, such as SHAP. By using background knowledge in the form of ontologies, ReEx generalizes instance explanations in a least-general-generalization-like manner. The resulting symbolic descriptions are specific to individual classes and offer generalizations based on the explainer's output. The derived semantic explanations are potentially more informative, as they describe the key attributes in the context of more general background knowledge, e.g., at the biological process level. We showcase ReEx's performance on nine biological data sets, showing that compact, semantic explanations can be obtained and are more informative than generic ontology mappings that link terms directly to feature names. ReEx is offered as a simple-to-use Python library and is compatible with explainers such as SHAP. To our knowledge, this is one of the first methods that directly couples semantic reasoning with contemporary model explanation methods.
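The core idea described in the abstract can be illustrated with a toy example. The sketch below is hypothetical, not the ReEx library's actual API: it assumes a tree-shaped ontology given as child-to-parent edges and a list of top features (e.g., by absolute SHAP value) for one predicted class, and it generalizes those features to their most specific shared ancestor terms, in the spirit of least general generalization.

```python
# Hypothetical sketch of generalizing per-instance feature importances
# (e.g., top SHAP features) to shared ancestor terms in an ontology.
# The ontology edges and feature names below are purely illustrative.

# Toy ontology as child -> parent edges (a tree, for simplicity).
PARENT = {
    "geneA": "dna_repair",
    "geneB": "dna_repair",
    "geneC": "cell_cycle",
    "dna_repair": "biological_process",
    "cell_cycle": "biological_process",
}

def ancestors(term):
    """Return the set of a term's ancestors, including the term itself."""
    seen = {term}
    while term in PARENT:
        term = PARENT[term]
        seen.add(term)
    return seen

def generalize(features):
    """Least-general-generalization-like step: find the ontology terms
    shared by all important features, then keep only the most specific
    ones (those that are not a strict ancestor of another shared term)."""
    common = set.intersection(*(ancestors(f) for f in features))
    return {t for t in common
            if not any(t in (ancestors(o) - {o}) for o in common)}

# Suppose these are the top features for one class, by |SHAP value|.
print(generalize(["geneA", "geneB"]))  # both genes generalize to 'dna_repair'
```

In this toy run, `geneA` and `geneB` share the ancestor `dna_repair`, so the class-level explanation moves from raw feature names to a biological-process-style term, which is the kind of semantic lift the paper describes.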
Pages: 105-110
Page count: 6
Related Papers (50 records)
  • [31] TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security
    Zolanvari, Maede
    Yang, Zebo
    Khan, Khaled
    Jain, Raj
    Meskin, Nader
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (04) : 2967 - 2978
  • [32] Interpretable Human Activity Recognition With Temporal Convolutional Networks and Model-Agnostic Explanations
    Bijalwan, Vishwanath
    Khan, Abdul Manan
    Baek, Hangyeol
    Jeon, Sangmin
    Kim, Youngshik
    IEEE SENSORS JOURNAL, 2024, 24 (17) : 27607 - 27617
  • [33] Explaining Black Boxes With a SMILE: Statistical Model-Agnostic Interpretability With Local Explanations
    Aslansefat, Koorosh
    Hashemian, Mojgan
    Walker, Martin
    Akram, Mohammed Naveed
    Sorokos, Ioannis
    Papadopoulos, Yiannis
    IEEE SOFTWARE, 2024, 41 (01) : 87 - 97
  • [34] X3SEG: MODEL-AGNOSTIC EXPLANATIONS FOR THE SEMANTIC SEGMENTATION OF 3D POINT CLOUDS WITH PROTOTYPES AND CRITICISM
    Heide, Nina Felicitas
    Mueller, Erik
    Petereit, Janko
    Heizmann, Michael
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3687 - 3691
  • [35] Generating structural alerts from toxicology datasets using the local interpretable model-agnostic explanations method
    Nascimento, Cayque Monteiro Castro
    Moura, Paloma Guimaraes
    Pimentel, Andre Silva
    DIGITAL DISCOVERY, 2023, 2 (05) : 1311 - 1325
  • [36] Model-Agnostic Federated Learning
    Mittone, Gianluca
    Riviera, Walter
    Colonnelli, Iacopo
    Birke, Robert
    Aldinucci, Marco
    EURO-PAR 2023: PARALLEL PROCESSING, 2023, 14100 : 383 - 396
  • [37] Temporal Knowledge Graph Reasoning with Low-rank and Model-agnostic Representations
    Dikeoulias, Ioannis
    Amin, Saadullah
    Neumann, Guenter
    PROCEEDINGS OF THE 7TH WORKSHOP ON REPRESENTATION LEARNING FOR NLP, 2022, : 111 - 120
  • [38] Interpretable Model-Agnostic Explanations Based on Feature Relationships for High-Performance Computing
    Chen, Zhouyuan
    Lian, Zhichao
    Xu, Zhe
    AXIOMS, 2023, 12 (10)
  • [39] Model-Agnostic Private Learning
    Bassily, Raef
    Thakkar, Om
    Thakurta, Abhradeep
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [40] Improving Object Recognition in Crime Scenes via Local Interpretable Model-Agnostic Explanations
    Farhood, Helia
    Saberi, Morteza
    Najafi, Mohammad
    2021 IEEE 25TH INTERNATIONAL ENTERPRISE DISTRIBUTED OBJECT COMPUTING CONFERENCE WORKSHOPS (EDOCW 2021), 2021, : 90 - 94