Quod erat demonstrandum? - Towards a typology of the concept of explanation for the design of explainable AI

Cited by: 45
|
Authors
Cabitza, Federico [1 ,2 ]
Campagner, Andrea [1 ]
Malgieri, Gianclaudio [3 ,4 ]
Natali, Chiara [1 ]
Schneeberger, David [5 ]
Stoeger, Karl [5 ]
Holzinger, Andreas [6 ]
Affiliations
[1] Univ Milano Bicocca, DISCo, viale Sarca 336, I-20126 Milan, Italy
[2] IRCCS Orthoped Inst Galeazzi, via Galeazzi, 4, I-20161 Milan, Italy
[3] EDHEC Business Sch, Augmented Law Inst, 24 Ave Gustave Delory, CS 50411, F-59057 Roubaix 1, France
[4] Leiden Univ, eLaw, Rapenburg 70, NL-2311 EZ Leiden, Netherlands
[5] Univ Vienna, Schottenbastei 10-16, A-1010 Vienna, Austria
[6] Univ Nat Resources & Life Sci Vienna, Peter Jordan Str 82, A-1190 Vienna, Austria
Funding
Austrian Science Fund;
Keywords
Explainable AI; XAI; Explanations; Taxonomy; Artificial intelligence; Machine learning; AUTOMATED DECISION-MAKING; BLACK-BOX; MACHINE; QUALITY;
DOI
10.1016/j.eswa.2022.118888
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we present a fundamental framework for defining different types of explanations of AI systems and the criteria for evaluating their quality. Starting from a structural view of how explanations can be constructed, i.e., in terms of an explanandum (what needs to be explained), multiple explanantia (explanations, clues, or pieces of information that explain), and a relationship linking explanandum and explanantia, we propose an explanandum-based typology and point to other possible typologies based on how explanantia are presented and how they relate to explananda. We also highlight two broad and complementary perspectives for defining possible quality criteria for assessing explainability: epistemological and psychological (cognitive). These definitional attempts aim to support the three main functions that we believe should attract the interest and further research of XAI scholars: clear inventories, clear verification criteria, and clear validation methods.
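To make the structural view in the abstract concrete, the following is a minimal Python sketch, not taken from the paper itself: all class, field, and value names (e.g. the "presentation" modality tag, the "relation" label, the assess method) are illustrative assumptions about how one explanandum, several explanantia, their linking relation, and the two quality perspectives could be modeled.

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class QualityPerspective(Enum):
    # The two complementary quality perspectives named in the abstract.
    EPISTEMOLOGICAL = "epistemological"
    PSYCHOLOGICAL = "psychological"  # i.e., cognitive

@dataclass
class Explanandum:
    # What needs to be explained, e.g. a model output for a given input.
    description: str

@dataclass
class Explanans:
    # One clue or piece of information that explains (one of many explanantia).
    content: str
    presentation: str  # hypothetical modality tag, e.g. "textual" or "visual"

@dataclass
class Explanation:
    # Links one explanandum to its explanantia through an explicit relation.
    explanandum: Explanandum
    explanantia: List[Explanans] = field(default_factory=list)
    relation: str = "causal"  # hypothetical relation label, e.g. "counterfactual"

    def assess(self, perspective: QualityPerspective) -> str:
        # Placeholder: actual criteria would come from the paper's typology.
        return (f"Assess '{self.explanandum.description}' "
                f"under the {perspective.value} perspective")

# Illustrative usage (all content invented):
exp = Explanation(
    Explanandum("why the classifier flagged this radiograph"),
    [Explanans("high attention on the left lung field", "visual"),
     Explanans("similar past cases were positive", "textual")],
    relation="evidential",
)
print(exp.assess(QualityPerspective.PSYCHOLOGICAL))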
Pages: 16