The existence of manual mode increases human blame for AI mistakes

Cited by: 0
Authors
Arnestad, Mads N. [1 ]
Meyers, Samuel [2 ]
Gray, Kurt [3 ]
Bigman, Yochanan E. [2 ]
Affiliations
[1] BI Norwegian Business Sch, Dept Leadership & Org, Oslo, Norway
[2] Hebrew Univ Jerusalem, Jerusalem, Israel
[3] Univ North Carolina Chapel Hill, Chapel Hill, NC USA
Keywords
Morality; Autonomous machines; Manual mode; Blame; Control; AI ethics; CULPABLE CONTROL; PSYCHOLOGY; SUPPORT; RESPONSIBILITY; AUTONOMY; PEOPLE; OTHERS; CAUSAL
DOI
10.1016/j.cognition.2024.105931
Chinese Library Classification (CLC) number
B84 [Psychology]
Subject classification codes
04; 0402
Abstract
People are offloading many tasks to artificial intelligence (AI), including driving, investment decisions, and medical choices, but it is human nature to want to maintain ultimate control. So even when using autonomous machines, people want a "manual mode", an option that shifts control back to themselves. Unfortunately, the mere existence of a manual mode leads to more human blame when AI makes mistakes. When observers know that a human agent theoretically had the option to take control, that human is assigned more responsibility, even when the agent lacked the time or ability to actually exert control, as in self-driving car crashes. Four experiments reveal that people prefer having a manual mode even when the AI mode is more efficient and adding the manual mode is more expensive (Study 1), and that the existence of a manual mode increases human blame (Studies 2a-3c). We examine two mediators of this effect: increased perceptions of causation and counterfactual cognition (Study 4). The results suggest that the human thirst for illusory control comes with real costs. Implications for AI decision-making are discussed.
Pages: 23
Related Papers
25 records in total
  • [1] It's the AI's fault, not mine: Mind perception increases blame attribution to AI
    Joo, Minjoo
    PLOS ONE, 2024, 19 (12):
  • [2] When both human and machine drivers make mistakes: Whom to blame?
    Zhai, Siming
    Gao, Shan
    Wang, Lin
    Liu, Peng
    TRANSPORTATION RESEARCH PART A-POLICY AND PRACTICE, 2023, 170
  • [3] Two Philosophical Mistakes: Human Nature and Existence
    Adler, M. J.
    NEW SCHOLASTICISM, 1985, 59 (01) : 1 - 20
  • [4] To err is human; to be perfect is AI: embracing mistakes as a catalyst for human formation development
    Deguma, Melona C.
    JOURNAL OF PUBLIC HEALTH, 2023, 46 (01) : e215 - e216
  • [5] Less human, more to blame: Animalizing poor people increases blame and decreases support for wealth redistribution
    Sainz, Mario
    Martinez, Rocio
    Sutton, Robbie M.
    Rodriguez-Bailon, Rosa
    Moya, Miguel
    GROUP PROCESSES & INTERGROUP RELATIONS, 2020, 23 (04) : 546 - 559
  • [6] When AI Fails, Who Do We Blame? Attributing Responsibility in Human-AI Interactions
    Schoenherr, Jordan Richard
    Thomson, Robert
    IEEE TRANSACTIONS ON TECHNOLOGY AND SOCIETY, 2024, 5 (01) : 61 - 70
  • [7] Giving AI a Human Touch: Highlighting Human Input Increases the Perceived Helpfulness of Advice from AI Coaches
    Zhang, Yue
    Tuk, Mirjam A.
    Klesse, Anne-Kathrin
    JOURNAL OF THE ASSOCIATION FOR CONSUMER RESEARCH, 2024, 9 (03) : 344 - 356
  • [8] The scientific outlook on development and changes in the mode of human existence
    Chen Xueming
    Luo Qian
    SOCIAL SCIENCES IN CHINA, 2009, 30 (01) : 54 - 67
  • [9] Who Is to Blame? Responsibility Attribution in AI Systems vs Human Agents in the Field of Air Crashes
    Gomez-Sanchez, Jesica
    Gordo, Cristina
    Franklin, Matija
    Fernandez-Basso, Carlos
    Lagnado, David
    FLEXIBLE QUERY ANSWERING SYSTEMS, FQAS 2023, 2023, 14113 : 256 - 264
  • [10] AI-determined similarity increases likability and trustworthiness of human voices
    Jaggy, Oliver
    Schwan, Stephan
    Meyerhoff, Hauke S.
    PLOS ONE, 2025, 20 (03):