Is explainable artificial intelligence intrinsically valuable?

Cited by: 12
Authors
Colaner, Nathan [1]
Affiliations
[1] Seattle Univ, Dept Management, Seattle, WA 98122 USA
Keywords
Explainable; XAI; Fairness; Value; Intrinsic; Dignity; DECISION-MAKING; PRIVACY
DOI
10.1007/s00146-021-01184-2
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
There is general consensus that explainable artificial intelligence ("XAI") is valuable, but there is significant divergence when we try to articulate why, exactly, it is desirable. This question must be distinguished from two other kinds of questions in the XAI literature that are sometimes asked and addressed simultaneously. The first and most obvious is the 'how' question: some version of 'how do we develop technical strategies to achieve XAI?' Another question is specifying what kind of explanation is worth having in the first place. As difficult and important as the challenges are in answering these questions, they are distinct from a third question: why do we want XAI at all? There is a vast literature on this question as well, but I wish to explore a different kind of answer. The most obvious way to answer it is by describing a desirable outcome that would likely be achieved with the right kind of explanation, which would make the explanation instrumentally valuable. That is, XAI is desirable as a means to some other value, such as fairness, trust, accountability, or governance. This family of arguments is obviously important, but I argue that explanations are also intrinsically valuable, because unexplainable systems can be dehumanizing. I argue that there are at least three independently valid versions of this kind of argument: an argument from participation, from knowledge, and from actualization. Each of these arguments that XAI is intrinsically valuable is independently compelling, in addition to the more obvious instrumental benefits of XAI.
Pages: 231-238
Page count: 8