Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque

Cited: 0
Authors
Uwe Peters
Affiliations
[1] University of Cambridge, Leverhulme Centre for the Future of Intelligence
[2] University of Bonn, Center for Science and Thought
Source
AI and Ethics | 2023, Vol. 3, Issue 3
Keywords
Artificial intelligence; Algorithms; Decision-making; Opacity; Mindshaping
DOI
10.1007/s43681-022-00217-w
Abstract
Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument overlooks that human decision-making is sometimes significantly more transparent and trustworthy than algorithmic decision-making. This is because when people explain their decisions by giving reasons for them, this frequently prompts those giving the reasons to govern or regulate themselves so as to think and act in ways that confirm their reason reports. AI explanation systems lack this self-regulative feature. Overlooking it when comparing algorithmic and human decision-making can result in underestimations of the transparency of human decision-making and in the development of explainable AI that may mislead people by activating generally warranted beliefs about the regulative dimension of reason-giving.
Pages: 963–974 (11 pages)