Is explainable artificial intelligence intrinsically valuable?

Cited: 12
Author
Colaner, Nathan [1]
Affiliation
[1] Seattle Univ, Dept Management, Seattle, WA 98122 USA
Keywords
Explainable; XAI; Fairness; Value; Intrinsic; Dignity; Decision-making; Privacy
DOI
10.1007/s00146-021-01184-2
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
There is general consensus that explainable artificial intelligence ("XAI") is valuable, but there is significant divergence when we try to articulate why, exactly, it is desirable. This question must be distinguished from two other kinds of questions in the XAI literature that are sometimes asked and addressed simultaneously. The first and most obvious is the 'how' question: some version of 'how do we develop technical strategies to achieve XAI?' The second is specifying what kind of explanation is worth having in the first place. As difficult and important as these questions are to answer, they are distinct from a third question: why do we want XAI at all? There is a vast literature on this question as well, but I wish to explore a different kind of answer. The most obvious way to answer it is by describing a desirable outcome that would likely be achieved with the right kind of explanation, which would make the explanation instrumentally valuable. That is, XAI is desirable as a means to some other value, such as fairness, trust, accountability, or governance. This family of arguments is obviously important, but I argue that explanations are also intrinsically valuable, because unexplainable systems can be dehumanizing. I argue that there are at least three independently valid versions of this kind of argument: an argument from participation, an argument from knowledge, and an argument from actualization. Each of these arguments for the intrinsic value of XAI is independently compelling, in addition to its more obvious instrumental benefits.
Pages: 231-238
Page count: 8
Related Papers
50 records in total
  • [1] Is explainable artificial intelligence intrinsically valuable?
    Nathan Colaner
    AI & SOCIETY, 2022, 37: 231-238
  • [2] Explainable artificial intelligence
    Wickramasinghe, Chathurika S.
    Marino, Daniel
    Amarasinghe, Kasun
    FRONTIERS IN COMPUTER SCIENCE, 2023, 5
  • [3] Explainable Artificial Intelligence for Kids
    Alonso, Jose M.
    PROCEEDINGS OF THE 11TH CONFERENCE OF THE EUROPEAN SOCIETY FOR FUZZY LOGIC AND TECHNOLOGY (EUSFLAT 2019), 2019, 1: 134-141
  • [4] Explainable and Trustworthy Artificial Intelligence
    Alonso-Moral, Jose Maria
    Mencar, Corrado
    Ishibuchi, Hisao
    IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, 2022, 17(01): 14-15
  • [5] Review of Explainable Artificial Intelligence
    Zhao, Yanyu
    Zhao, Xiaoyong
    Wang, Lei
    Wang, Ningning
    Computer Engineering and Applications, 2023, 59(14): 1-14
  • [6] Explainable artificial intelligence in pathology
    Klauschen, Frederick
    Dippel, Jonas
    Keyl, Philipp
    Jurmeister, Philipp
    Bockmayr, Michael
    Mock, Andreas
    Buchstab, Oliver
    Alber, Maximilian
    Ruff, Lukas
    Montavon, Gregoire
    Mueller, Klaus-Robert
    PATHOLOGIE, 2024: 133-139
  • [7] Explainable and responsible artificial intelligence
    Christian Meske
    Babak Abedin
    Mathias Klier
    Fethi Rabhi
    Electronic Markets, 2022, 32: 2103-2106
  • [8] Explainable Artificial Intelligence in education
    Khosravi H.
    Shum S.B.
    Chen G.
    Conati C.
    Tsai Y.-S.
    Kay J.
    Knight S.
    Martinez-Maldonado R.
    Sadiq S.
    Gašević D.
    Computers and Education: Artificial Intelligence, 2022, 3
  • [9] On the Need of an Explainable Artificial Intelligence
    Zanni-Merk, Cecilia
    INFORMATION SYSTEMS ARCHITECTURE AND TECHNOLOGY, ISAT 2019, PT I, 2020, 1050: 3-3
  • [10] Explainable Artificial Intelligence for Cybersecurity
    Sharma, Deepak Kumar
    Mishra, Jahanavi
    Singh, Aeshit
    Govil, Raghav
    Srivastava, Gautam
    Lin, Jerry Chun-Wei
    COMPUTERS & ELECTRICAL ENGINEERING, 2022, 103