Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque

Cited by: 0
Author
Uwe Peters
Affiliations
[1] University of Cambridge, Leverhulme Centre for the Future of Intelligence
[2] University of Bonn, Center for Science and Thought
Source
AI and Ethics | 2023, Volume 3, Issue 3
Keywords
Artificial intelligence; Algorithms; Decision-making; Opacity; Mindshaping
DOI
10.1007/s43681-022-00217-w
Abstract
Many artificial intelligence (AI) systems currently used for decision-making are opaque: the internal factors that determine their decisions are not fully known to people because of the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque, and that, since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument overlooks that human decision-making is sometimes significantly more transparent and trustworthy than algorithmic decision-making. This is because when people explain their decisions by giving reasons for them, doing so frequently prompts them to govern or regulate themselves so as to think and act in ways that confirm their reason reports. AI explanation systems lack this self-regulative feature. Overlooking it when comparing algorithmic and human decision-making can result in underestimating the transparency of human decision-making and in developing explainable AI that may mislead people by activating generally warranted beliefs about the regulative dimension of reason-giving.
Pages: 963-974 (11 pages)
Related Papers (50 in total)
  • [1] Who Made That Decision and Why? Users' Perceptions of Human Versus AI Decision-Making and the Power of Explainable-AI
    Shulner-Tal, Avital
    Kuflik, Tsvi
    Kliger, Doron
    Mancini, Azzurra
    International Journal of Human-Computer Interaction, 2024.
  • [2] Explainable AI for enhanced decision-making
    Coussement, Kristof
    Abedin, Mohammad Zoynul
    Kraus, Mathias
    Maldonado, Sebastian
    Topuz, Kazim
    Decision Support Systems, 2024, 184.
  • [3] Effects of Explanation Strategy and Autonomy of Explainable AI on Human-AI Collaborative Decision-making
    Wang, Bingcheng
    Yuan, Tianyi
    Rau, Pei-Luen Patrick
    International Journal of Social Robotics, 2024, 16 (04): 791-810.
  • [4] Enhancing medical decision-making with ChatGPT and explainable AI
    Chopra, Aryan
    Rajput, Dharmendra Singh
    Patel, Harshita
    International Journal of Surgery, 2024, 110 (08): 5167-5168.
  • [5] AI employment decision-making: integrating the equal opportunity merit principle and explainable AI
    Chan, Gary K. Y.
    AI & Society, 2024, 39 (03): 1027-1038.
  • [6] Increasing Transparency in Algorithmic-Decision-Making with Explainable AI
    Waltl, Bernhard
    Vogl, Roland
    Datenschutz und Datensicherheit - DuD, 2018, 42 (10): 613-617.
  • [7] Creative Explainable AI Tools to Understand Algorithmic Decision-Making
    Bhat, Maalvika
    Proceedings of the 16th Conference on Creativity and Cognition (C&C 2024), 2024: 10-16.
  • [8] The perils and pitfalls of explainable AI: Strategies for explaining algorithmic decision-making
    de Bruijn, Hans
    Warnier, Martijn
    Janssen, Marijn
    Government Information Quarterly, 2022, 39 (02).
  • [9] A Meta-Analysis of the Utility of Explainable Artificial Intelligence in Human-AI Decision-Making
    Schemmer, Max
    Hemmer, Patrick
    Nitsche, Maximilian
    Kuehl, Niklas
    Voessing, Michael
    Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES 2022), 2022: 617-626.
  • [10] Analyzing Employee Attrition Using Explainable AI for Strategic HR Decision-Making
    Diaz, Gabriel Marin
    Hernandez, Jose Javier Galan
    Salvador, Jose Luis Galdon
    Mathematics, 2023, 11 (22).