When Do We Accept Mistakes from Chatbots? The Impact of Human-Like Communication on User Experience in Chatbots That Make Mistakes

Cited by: 22
Authors
Siqueira, Marianna A. de Sa [1 ]
Muller, Barbara C. N. [1 ]
Bosse, Tibor [1 ]
Affiliations
[1] Radboud Univ Nijmegen, Behav Sci Inst, Fac Social Sci, Nijmegen, Netherlands
Keywords
MOTIVATION; CUES;
DOI
10.1080/10447318.2023.2175158
CLC Classification
TP3 [Computing technology, computer technology]
Discipline Code
0812
Abstract
Chatbots are becoming omnipresent in our daily lives. Despite rapid improvements in natural language processing in recent years, the technology behind chatbots is still not completely mature, and chatbots still make many mistakes in their interactions with users. Since technological constraints make it impossible to prevent mistakes entirely, this article investigates whether a human-like communication style can reduce the negative impact of chatbots' mistakes on users. Taking a combination of the Technology Acceptance Model and the concepts of Perceived Enjoyment and Social Presence as a theoretical basis, we conducted an online experiment in which participants interacted with a chatbot and completed a survey afterwards. We found that chatbot mistakes have a negative effect on users' perceptions of Ease of Use, Usefulness, Enjoyment, and Social Presence. Human-like communication was effective in reducing the negative impact of mistakes on Perceived Enjoyment. Theoretical and practical implications are discussed.
Pages: 2862-2872
Page count: 11