Using ontologies to enhance human understandability of global post-hoc explanations of black-box models

Cited by: 63
Authors
Confalonieri, Roberto [1 ]
Weyde, Tillman [2 ]
Besold, Tarek R. [3 ]
Martin, Fermin Moscoso del Prado [4 ]
Affiliations
[1] Free Univ Bozen Bolzano, Fac Comp Sci, I-39100 Bozen Bolzano, Italy
[2] City Univ London, Dept Comp Sci, London EC1V 0HB, England
[3] Neurocat GmbH, Rudower Chaussee 29, D-12489 Berlin, Germany
[4] Lingvist Technol OU, Tallinn, Estonia
Keywords
Human-understandable explainable AI; Global explanations; Ontologies; Neural-symbolic learning and reasoning; Knowledge extraction; Concept refinement; INFORMATION-CONTENT; KNOWLEDGE; CLASSIFICATION; WEB;
DOI
10.1016/j.artint.2021.103471
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The interest in explainable artificial intelligence has grown strongly in recent years because of the need to convey safety and trust in the 'how' and 'why' of automated decision-making to users. While a plethora of approaches has been developed, only a few focus on how to use domain knowledge and on how this influences the understanding of explanations by users. In this paper, we show that by using ontologies we can improve the human understandability of global post-hoc explanations, presented in the form of decision trees. In particular, we introduce TREPAN Reloaded, which builds on TREPAN, an algorithm that extracts surrogate decision trees from black-box models. TREPAN Reloaded incorporates ontologies, which model domain knowledge, into the explanation extraction process in order to improve the understandability of the resulting explanations. We tested the human understandability of the extracted explanations in a user study with four different tasks. We evaluate the results in terms of response times and correctness, subjective ease of understanding and confidence, and similarity of free-text responses. The results show that decision trees generated with TREPAN Reloaded, taking domain knowledge into account, are consistently and significantly more understandable than those generated by standard TREPAN. The enhanced understandability of post-hoc explanations is achieved with little compromise on the accuracy with which the surrogate decision trees replicate the behaviour of the original neural network models. (C) 2021 The Author(s). Published by Elsevier B.V.
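The abstract's core mechanism — querying a black-box model and fitting an interpretable surrogate that maximises fidelity to its answers — can be sketched in a few lines. The toy black box, the sampled data, and the one-node "stump" surrogate below are illustrative stand-ins only, not the paper's TREPAN Reloaded algorithm or its ontology-driven concept refinement.

```python
# Minimal sketch of the surrogate-model idea behind TREPAN-style
# explanation: query a black-box classifier on sampled inputs, then
# fit a simple interpretable model (here a one-feature decision stump,
# the one-node analogue of a surrogate decision tree) that maximises
# fidelity to the black box's answers.
import random

random.seed(0)

def black_box(x):
    # Stand-in for an opaque model (e.g. a trained neural network):
    # internally it thresholds feature 1, but the surrogate-builder
    # only sees its input/output behaviour.
    return 1 if x[1] > 0.6 else 0

# Sample inputs and label them with the black box (membership queries).
X = [[random.random() for _ in range(3)] for _ in range(500)]
y = [black_box(x) for x in X]

def stump_fidelity(feature, threshold):
    # Fraction of samples on which the stump agrees with the black box.
    preds = [1 if x[feature] > threshold else 0 for x in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# Exhaustively search (feature, threshold) pairs for the stump with the
# highest fidelity; growing a full surrogate tree repeats this split
# search recursively at every node.
best = max(
    ((f, t / 20) for f in range(3) for t in range(1, 20)),
    key=lambda ft: stump_fidelity(*ft),
)

print(best, stump_fidelity(*best))
```

The printed pair recovers the feature and threshold the black box uses internally, because fidelity (agreement with the black box), not accuracy on ground truth, is the objective — the same distinction the abstract draws when noting that understandability is gained "with little compromise" on how faithfully the trees replicate the network.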
Pages: 20
Related papers
50 records
  • [21] Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure
    Novello, Paul
    Fel, Thomas
    Vigouroux, David
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [22] Evaluation of Human-Understandability of Global Model Explanations Using Decision Tree
    Sivaprasad, Adarsa
    Reiter, Ehud
    Tintarev, Nava
    Oren, Nir
    ARTIFICIAL INTELLIGENCE-ECAI 2023 INTERNATIONAL WORKSHOPS, PT 1, XAI3, TACTIFUL, XI-ML, SEDAMI, RAAIT, AI4S, HYDRA, AI4AI, 2023, 2024, 1947 : 43 - 65
  • [23] GLocalX - From Local to Global Explanations of Black Box AI Models
    Setzu, Mattia
    Guidotti, Riccardo
    Monreale, Anna
    Turini, Franco
    Pedreschi, Dino
    Giannotti, Fosca
    ARTIFICIAL INTELLIGENCE, 2021, 294
  • [24] A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods
    Vilone, Giulia
    Longo, Luca
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2021, 4
  • [25] From large language models to small logic programs: building global explanations from disagreeing local post-hoc explainers
    Agiollo, Andrea
    Siebert, Luciano Cavalcante
    Murukannaiah, Pradeep K.
    Omicini, Andrea
    AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS, 2024, 38 (02)
  • [26] Understanding biological timing using mechanistic and black-box models
    Dalchau, Neil
    NEW PHYTOLOGIST, 2012, 193 (04) : 852 - 858
  • [27] Explaining the Unseen: Leveraging XAI to Enhance the Trustworthiness of Black-Box Models in Performance Testing
    Shoemaker, Eric
    Malik, Haroon
    Narman, Husnu
    Chaudri, Jamil
    18TH INTERNATIONAL CONFERENCE ON FUTURE NETWORKS AND COMMUNICATIONS, FNC 2023/20TH INTERNATIONAL CONFERENCE ON MOBILE SYSTEMS AND PERVASIVE COMPUTING, MOBISPC 2023/13TH INTERNATIONAL CONFERENCE ON SUSTAINABLE ENERGY INFORMATION TECHNOLOGY, SEIT 2023, 2023, 224 : 83 - 90
  • [28] Rule-based approximation of black-box classifiers for tabular data to generate global and local explanations
    Maszczyk, Cezary
    Kozielski, Michal
    Sikora, Marek
    PROCEEDINGS OF THE 2022 17TH CONFERENCE ON COMPUTER SCIENCE AND INTELLIGENCE SYSTEMS (FEDCSIS), 2022, : 89 - 92
  • [29] Bayesian Proxy Modelling for Estimating Black Carbon Concentrations using White-Box and Black-Box Models
    Zaidan, Martha A.
    Wraith, Darren
    Boor, Brandon E.
    Hussein, Tareq
    APPLIED SCIENCES-BASEL, 2019, 9 (22)
  • [30] Global sensitivity analyses for test planning with black-box models for Mars Sample Return
    Cataldo, Giuseppe
    Borgonovo, Emanuele
    Siddens, Aaron
    Carpenter, Kevin
    Nado, Martin
    Plischke, Elmar
    RISK ANALYSIS, 2025,