Make chatbots more adaptive: Dual pathways linking human-like cues and tailored response to trust in interactions with chatbots

Cited by: 51
Authors
Jiang, Yi [1 ,2 ]
Yang, Xiangcheng [1 ]
Zheng, Tianqi [1 ]
Affiliations
[1] China Univ Geosci, Sch Econ & Management, Wuhan 430078, Peoples R China
[2] China Univ Geosci, Sch Econ & Management, Future City Campus,68 Jincheng St, Wuhan 430078, Peoples R China
Keywords
Chatbot; Trust; Human-like cues; Task-technology fit; Ambiguity tolerance; Tailored response; TASK-TECHNOLOGY FIT; ARTIFICIAL-INTELLIGENCE; INTEGRATIVE MODEL; SOCIAL PRESENCE; INFORMATION; SYSTEMS; AMBIGUITY; UNCERTAINTY; PERCEPTIONS; PERSPECTIVE
DOI
10.1016/j.chb.2022.107485
Chinese Library Classification (CLC)
B84 [Psychology]
Subject Classification
04; 0402
Abstract
As one of the most popular AI applications, chatbots are creating new ways for businesses to interact with their customers and generate value, and their adoption and continued use depend on users' trust. However, owing to the non-transparency of AI-related technology and the ambiguity of application boundaries, it is difficult to determine which aspects enhance the adaptiveness of chatbots and how they interactively affect human trust. Based on the theory of task-technology fit, we developed a research model to investigate how two conversational cues of chatbots, human-like cues and tailored responses, influence human trust toward chatbots, and to explore appropriate boundary conditions (individual characteristics and task characteristics) in human-chatbot interactions. One survey and two experiments were performed to test the research model, and the results indicated that (1) perceived task-solving competence and social presence mediate the pathway from conversational cues to human trust, which was validated in the contexts of e-commerce and education; (2) the extent of users' ambiguity tolerance moderates the effects of the two conversational cues on social presence; and (3) when performing highly creative tasks, a human-like chatbot induces higher perceived task-solving competence. Our findings not only contribute to the AI trust literature but also provide practical implications for the development of chatbots and their assignment to individuals and tasks.
Pages: 15
Related Papers
4 records in total
  • [1] Risk and prosocial behavioural cues elicit human-like response patterns from AI chatbots
    Zhao, Yukun
    Huang, Zhen
    Seligman, Martin
    Peng, Kaiping
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [2] EmotionPush: Emotion and Response Time Prediction towards Human-Like Chatbots
    Huang, Chieh-Yang
    Ku, Lun-Wei
    2018 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2018
  • [3] When Do We Accept Mistakes from Chatbots? The Impact of Human-Like Communication on User Experience in Chatbots That Make Mistakes
    Siqueira, Marianna A. de Sa
    Muller, Barbara C. N.
    Bosse, Tibor
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 2024, 40 (11) : 2862 - 2872
  • [4] On the Construction of more Human-like Chatbots: Affect and Emotion Analysis of Movie Dialogue Data
    Banchs, Rafael E.
    2017 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC 2017), 2017: 1364-1367