Semantic Reasoning from Model-Agnostic Explanations

Cited by: 3
Authors
Perdih, Timen Stepisnik [1 ]
Lavrac, Nada [1 ,2 ]
Skrlj, Blaz [3 ]
Affiliations
[1] Jozef Stefan Inst, Ljubljana, Slovenia
[2] Univ Nova Gorica, Nova Gorica, Slovenia
[3] Jozef Stefan Inst, Jozef Stefan Int Postgrad Sch, Ljubljana, Slovenia
Keywords
model explanations; reasoning; generalization; SHAP; machine learning; explainable AI;
DOI
10.1109/SAMI50585.2021.9378668
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
With the wide adoption of black-box models, instance-based post hoc explanation tools such as LIME and SHAP have become increasingly popular. These tools produce explanations that pinpoint the contributions of key features to a given prediction. However, the obtained explanations remain at the raw feature level and are not necessarily understandable by a human expert without extensive domain knowledge. We propose ReEx (Reasoning with Explanations), a method applicable to explanations generated by arbitrary instance-level explainers, such as SHAP. Using background knowledge in the form of ontologies, ReEx generalizes instance explanations in a least-general-generalization-like manner. The resulting symbolic descriptions are specific to individual classes and offer generalizations based on the explainer's output. The derived semantic explanations are potentially more informative, as they describe the key attributes in the context of more general background knowledge, e.g., at the biological process level. We demonstrate ReEx on nine biological data sets, showing that compact semantic explanations can be obtained and are more informative than generic ontology mappings that link terms directly to feature names. ReEx is offered as a simple-to-use Python library and is compatible with tools such as SHAP. To our knowledge, this is one of the first methods that directly couples semantic reasoning with contemporary model explanation methods.
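To make the generalization step concrete, the following is a minimal, illustrative Python sketch of the idea described in the abstract, not the actual ReEx API: given SHAP-style per-instance attributions, the top-ranked features are mapped to ontology terms, and the terms are climbed toward their most specific common ancestors. The toy ontology, the feature-to-term mapping (GENE_A, GENE_B, GENE_C), the attribution values, and the helper functions ancestors and generalize are all hypothetical assumptions introduced for illustration.

```python
# Illustrative sketch only (not the ReEx API): generalizing instance-level
# feature attributions over a toy ontology in a least-general-generalization-
# like manner.  All names and values below are hypothetical.

# Toy ontology: each term maps to its parent (None marks the root).
ONTOLOGY_PARENT = {
    "apoptotic process": "cell death",
    "necrotic cell death": "cell death",
    "cell death": "biological process",
    "DNA repair": "DNA metabolic process",
    "DNA metabolic process": "biological process",
    "biological process": None,
}

# Direct mapping from raw features (e.g., genes) to ontology terms.
FEATURE_TO_TERM = {
    "GENE_A": "apoptotic process",
    "GENE_B": "necrotic cell death",
    "GENE_C": "DNA repair",
}

def ancestors(term):
    """Return the term together with all of its ancestors up to the root."""
    chain = []
    while term is not None:
        chain.append(term)
        term = ONTOLOGY_PARENT[term]
    return chain

def generalize(attributions, top_k=2):
    """Pick the top-k features by absolute attribution and return the most
    specific ontology terms shared by all of them."""
    top = sorted(attributions, key=lambda f: abs(attributions[f]), reverse=True)[:top_k]
    term_sets = [set(ancestors(FEATURE_TO_TERM[f])) for f in top]
    common = set.intersection(*term_sets)
    # Keep only the most specific common terms (no common term lies below them).
    return [t for t in common
            if not any(ONTOLOGY_PARENT[c] == t for c in common)]

# Example: SHAP-style attributions for one instance of the positive class.
shap_like_scores = {"GENE_A": 0.42, "GENE_B": 0.31, "GENE_C": 0.05}
print(generalize(shap_like_scores))  # -> ['cell death']
```

In this toy example, the two highest-ranked genes map to different kinds of cell death, so the sketch reports the shared term "cell death" rather than the raw gene names, which mirrors the paper's goal of lifting explanations from the feature level to the background-knowledge level.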
Pages: 105-110
Number of pages: 6
Related Papers
50 records in total
  • [41] Evaluating Local Interpretable Model-Agnostic Explanations on Clinical Machine Learning Classification Models
    Kumarakulasinghe, Nesaretnam Barr
    Blomberg, Tobias
    Lin, Jintai
    Leao, Alexandra Saraiva
    Papapetrou, Panagiotis
    2020 IEEE 33RD INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS (CBMS 2020), 2020: 7-12
  • [42] Explain the Explainer: Interpreting Model-Agnostic Counterfactual Explanations of a Deep Reinforcement Learning Agent
    Chen Z.
    Silvestri F.
    Tolomei G.
    Wang J.
    Zhu H.
    Ahn H.
    IEEE Transactions on Artificial Intelligence, 2024, 5 (4): 1443-1457
  • [43] Model-agnostic counterfactual reasoning for identifying and mitigating answer bias in knowledge tracing
    Cui, Chaoran
    Ma, Hebo
    Dong, Xiaolin
    Zhang, Chen
    Zhang, Chunyun
    Yao, Yumo
    Chen, Meng
    Ma, Yuling
    NEURAL NETWORKS, 2024, 178
  • [44] Generalizable model-agnostic semantic segmentation via target-specific normalization
    Zhang, Jian
    Qi, Lei
    Shi, Yinghuan
    Gao, Yang
    PATTERN RECOGNITION, 2022, 122
  • [45] From local counterfactuals to global feature importance: efficient, robust, and model-agnostic explanations for brain connectivity networks
    Alfeo, Antonio Luca
    Zippo, Antonio G.
    Catrambone, Vincenzo
    Cimino, Mario G. C. A.
    Toschi, Nicola
    Valenza, Gaetano
    COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2023, 236
  • [46] Is Bayesian Model-Agnostic Meta Learning Better than Model-Agnostic Meta Learning, Provably?
    Chen, Lisha
    Chen, Tianyi
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [47] Pleural effusion diagnosis using local interpretable model-agnostic explanations and convolutional neural network
    Nguyen H.T.
    Nguyen C.N.T.
    Phan T.M.N.
    Dao T.C.
    IEIE Transactions on Smart Processing and Computing, 2021, 10 (2): 101-108
  • [48] ASTERYX: A model-Agnostic SaT-basEd appRoach for sYmbolic and score-based eXplanations
    Boumazouza, Ryma
    Cheikh-Alili, Fahima
    Mazure, Bertrand
    Tabia, Karim
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021: 120-129
  • [49] Applying local interpretable model-agnostic explanations to identify substructures that are responsible for mutagenicity of chemical compounds
    Rosa, Lucca Caiaffa Santos
    Pimentel, Andre Silva
    MOLECULAR SYSTEMS DESIGN & ENGINEERING, 2024, 9 (9): 920-936
  • [50] "I do not know! but why?"- Local model-agnostic example-based explanations of reject
    Artelt, Andre
    Visser, Roel
    Hammer, Barbara
    NEUROCOMPUTING, 2023, 558