The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

Cited by: 101
Authors
Paez, Andres [1]
Affiliation
[1] Univ Andes, Dept Philosophy, Carrera 1 18A-12 G-533, Bogota 111711, DC, Colombia
Keywords
Explainable artificial intelligence; Understanding; Explanation; Model transparency; Post-hoc interpretability; Machine learning; Black box models; EXPLANATION; KNOWLEDGE
DOI
10.1007/s11023-019-09502-w
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a prior grasp of what it means to say that an agent understands a model or a decision, explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve objectual understanding of a machine learning model, but are also a necessary condition for post hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post hoc interpretability that predominates in the recent literature.
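To make concrete what the abstract calls an interpretative or approximation model, the minimal sketch below fits a transparent surrogate to a black-box model's predictions and measures its fidelity, the usual global-surrogate route to post hoc interpretability. It is a generic illustration assuming scikit-learn; the breast-cancer dataset, the RandomForestClassifier black box, and the DecisionTreeClassifier surrogate are stand-ins chosen for brevity, not Páez's own proposal.

# Illustrative sketch: a global surrogate ("approximation model") for a black-box classifier.
# Assumes scikit-learn is installed; dataset and model choices are arbitrary stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. The opaque model whose decisions stakeholders are asked to understand.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# 2. The interpretative/approximation model: a shallow tree trained to mimic
#    the black box's predictions rather than the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Fidelity: how closely the surrogate reproduces the black box on unseen data,
#    one pragmatic measure of the "best fit" between model and interpretive device.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to black box: {fidelity:.2f}")

# 4. The surrogate itself is transparent and can be inspected directly.
print(export_text(surrogate, feature_names=list(load_breast_cancer().feature_names)))

Note that the factivity worry the abstract raises shows up here as the gap between fidelity and truth: the surrogate's rules describe the black box only approximately, yet they are what a stakeholder can actually grasp.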
Pages: 441-459
Page count: 19
Related Papers
50 records in total
  • [1] The Pragmatic Turn in Explainable Artificial Intelligence (XAI)
    Andrés Páez
    [J]. Minds and Machines, 2019, 29 : 441 - 459
  • [2] XAI-Explainable artificial intelligence
    Gunning, David
    Stefik, Mark
    Choi, Jaesik
    Miller, Timothy
    Stumpf, Simone
    Yang, Guang-Zhong
    [J]. SCIENCE ROBOTICS, 2019, 4 (37)
  • [3] Explainable Artificial Intelligence (XAI) in auditing
    Zhang, Chanyuan
    Cho, Soohyun
    Vasarhelyi, Miklos
    [J]. INTERNATIONAL JOURNAL OF ACCOUNTING INFORMATION SYSTEMS, 2022, 46
  • [4] Explainable Artificial Intelligence (XAI) in Insurance
    Owens, Emer
    Sheehan, Barry
    Mullins, Martin
    Cunneen, Martin
    Ressel, Juliane
    Castignani, German
    [J]. RISKS, 2022, 10 (12)
  • [5] A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI
    Tjoa, Erico
    Guan, Cuntai
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (11) : 4793 - 4813
  • [6] A Review of Trustworthy and Explainable Artificial Intelligence (XAI)
    Chamola, Vinay
    Hassija, Vikas
    Sulthana, A. Razia
    Ghosh, Debshishu
    Dhingra, Divyansh
    Sikdar, Biplab
    [J]. IEEE ACCESS, 2023, 11 : 78994 - 79015
  • [8] Evaluation Metrics in Explainable Artificial Intelligence (XAI)
    Coroama, Loredana
    Groza, Adrian
    [J]. ADVANCED RESEARCH IN TECHNOLOGIES, INFORMATION, INNOVATION AND SUSTAINABILITY, ARTIIS 2022, PT I, 2022, 1675 : 401 - 413
  • [9] Special issue on Explainable Artificial Intelligence (XAI)
    Miller, Tim
    Hoffman, Robert
    Amir, Ofra
    Holzinger, Andreas
    [J]. ARTIFICIAL INTELLIGENCE, 2022, 307
  • [10] Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey
    Kok, Ibrahim
    Okay, Feyza Yildirim
    Muyanli, Ozgecan
    Ozdemir, Suat
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (16) : 14764 - 14779