(Why) Do We Trust AI?: A Case of AI-based Health Chatbots

Cited by: 0
Authors
Prakash, Ashish Viswanath [1 ]
Das, Saini [2 ]
Affiliations
[1] Indian Inst Management, Tiruchirappalli, Tamil Nadu, India
[2] Indian Inst Technol Kharagpur, Kharagpur, India
Keywords
Artificial Intelligence; Health Chatbot; Trust in Technology; Explainability; Contextualization; Free Simulation Experiment; ANTHROPOMORPHISM INCREASES TRUST; COMMON METHOD VARIANCE; SERVICE QUALITY; ARTIFICIAL-INTELLIGENCE; E-COMMERCE; INFORMATION QUALITY; USER ADOPTION; BLACK-BOX; RISK; IMPACT;
DOI
10.3127/ajis.v28i0.4235
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Automated chatbots powered by artificial intelligence (AI) can act as a ubiquitous point of contact, improving access to healthcare and empowering users to make effective decisions. However, despite the potential benefits, emerging literature suggests that apprehensions linked to the distinctive features of AI technology and the specific context of use (healthcare) could undermine consumer trust and hinder widespread adoption. Although the role of trust is considered pivotal to the acceptance of healthcare technologies, a dearth of research exists that focuses on the contextual factors that drive trust in such AI-based Chatbots for Self-Diagnosis (AICSD). Accordingly, a contextual model based on the trust-in-technology framework was developed to understand the determinants of consumers' trust in AICSD and its behavioral consequences. It was validated using a free simulation experiment study in India (N = 202). Perceived anthropomorphism, perceived information quality, perceived explainability, disposition to trust technology, and perceived service quality influence consumers' trust in AICSD. In turn, trust, privacy risk, health risk, and gender determine the intention to use. The research contributes by developing and validating a context-specific model for explaining trust in AICSD that could aid developers and marketers in enhancing consumers' trust in and adoption of AICSD.
Pages: 43