Theory of trust and acceptance of artificial intelligence technology (TrAAIT): An instrument to assess clinician trust and acceptance of artificial intelligence

Times Cited: 3
Authors
Stevens, Alexander F. [1 ]
Stetson, Pete [1 ,2 ]
Affiliations
[1] Mem Sloan Kettering Canc Ctr, Digital Prod & Informat Div, DigITs, New York, NY 10065 USA
[2] Mem Sloan Kettering Canc Ctr, Dept Med, New York, NY USA
Funding
U.S. National Institutes of Health;
Keywords
Clinician trust; Trustworthy artificial intelligence/machine learning (AI/ML); Human-computer interaction; Technology acceptance; Digital healthcare; AI/ML adoption; INFORMATION-SYSTEMS SUCCESS; PLS-SEM; USER ACCEPTANCE; UNIFIED THEORY; MODEL; MANAGEMENT; QUALITY; INTENTION; DELONE; HEALTH;
DOI
10.1016/j.jbi.2023.104550
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Discipline Codes
081203; 0835;
Abstract
Background: Artificial intelligence and machine learning (AI/ML) technologies such as generative and ambient AI solutions are proliferating in real-world healthcare settings. Clinician trust affects the adoption and impact of these systems, so organizations need a validated method for assessing the factors underlying trust and acceptance of AI in clinical workflows.

Objective: Our study set out to develop and assess a novel clinician-centered model to measure and explain trust and adoption of AI technology. We hypothesized that clinicians' system-specific Trust in AI is the primary predictor of both Acceptance (i.e., willingness to adopt) and post-adoption Trusting Stance (i.e., general stance towards any AI system). We validated the new model at an urban comprehensive cancer center and produced an easily implemented survey tool for measuring clinician trust and adoption of AI.

Methods: This survey-based, cross-sectional, psychometric study comprised a model development phase and a validation phase. Measurement used five-point ascending unidirectional Likert scales. The development sample included N = 93 clinicians (physicians, advanced practice providers, nurses) who used an AI-based communication application. The validation sample included N = 73 clinicians who used a commercially available AI-powered speech-to-text application for note-writing in an electronic health record (EHR). Analytical procedures included exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and partial least squares structural equation modeling (PLS-SEM). The Johnson-Neyman (JN) methodology was used to determine moderator effects.

Results: In the fully moderated causal model, clinician trust explained a large share of the variance in acceptance of a specific AI application (56%) and in post-adoption trusting stance towards AI in general (36%). Moderators included organizational assurances, length of time using the application, and clinician age. The final validated instrument has 20 items and takes 5 min to complete on average.

Conclusions: We found that clinician acceptance of AI is determined by the degree of trust formed via information credibility, perceived application value, and reliability. The novel model, TrAAIT, explains the factors underlying AI trustworthiness and acceptance for clinicians. With its easy-to-use instrument and Summative Score Dashboard, TrAAIT can help organizations implementing AI identify and intercept barriers to clinician adoption in real-world settings.
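The Methods section names the Johnson-Neyman (JN) technique for probing moderators such as clinician age. As a rough, self-contained illustration of what that analysis involves (not the authors' actual model or data), the sketch below computes JN significance boundaries for a simple moderated regression using statsmodels; the variable names (trust, age, acceptance), the simulated data, and the two-boundary printout are assumptions for demonstration only.

```python
# Minimal Johnson-Neyman sketch for a moderated regression
# Y = b0 + b1*X + b2*Z + b3*X*Z, where Z moderates the X -> Y slope.
# All variables here are simulated stand-ins, not the TrAAIT data.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 93  # development-sample size reported in the abstract

trust = rng.normal(size=n)                     # X: system-specific trust
age = rng.normal(size=n)                       # Z: hypothetical moderator
acceptance = (0.6 * trust + 0.2 * age          # Y: acceptance, with a
              + 0.3 * trust * age              # built-in interaction
              + rng.normal(size=n))

# Fit Y ~ X + Z + X*Z (constant in column 0)
X = sm.add_constant(np.column_stack([trust, age, trust * age]))
fit = sm.OLS(acceptance, X).fit()
b1, b3 = fit.params[1], fit.params[3]          # slope of X, interaction
cov = fit.cov_params()
v1, v3, c13 = cov[1, 1], cov[3, 3], cov[1, 3]

# Simple slope of X at moderator value z is b1 + b3*z, with variance
# v1 + z^2*v3 + 2*z*c13.  JN boundaries solve (b1 + b3*z)^2 =
# t_crit^2 * var(z), a quadratic a*z^2 + b*z + c = 0 in z.
t2 = stats.t.ppf(0.975, fit.df_resid) ** 2
a = b3 ** 2 - t2 * v3
b = 2 * (b1 * b3 - t2 * c13)
c = b1 ** 2 - t2 * v1
disc = b ** 2 - 4 * a * c
if disc >= 0:
    z_lo, z_hi = sorted(((-b - np.sqrt(disc)) / (2 * a),
                         (-b + np.sqrt(disc)) / (2 * a)))
    print(f"JN boundaries: z = {z_lo:.3f} and z = {z_hi:.3f}")
else:
    print("No real JN boundaries: the simple slope's significance "
          "does not change across the moderator range.")
```

The boundaries mark the moderator values at which the trust-to-acceptance slope crosses in or out of statistical significance, which is how a JN analysis identifies, for example, the age range over which trust predicts acceptance.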
Pages: 14
Related Papers
50 records in total
  • [41] An empirical evaluation of technology acceptance model for Artificial Intelligence in E-commerce
    Wang, Chenxing
    Ahmad, Sayed Fayaz
    Ayassrah, Ahmad Y. A. Bani Ahmad
    Awwad, Emad Mahrous
    Irshad, Muhammad
    Ali, Yasser A.
    Al-Razgan, Muna
    Khan, Yasser
    Han, Heesup
    [J]. HELIYON, 2023, 9 (08)
  • [42] A longitudinal model of continued acceptance of conversational artificial intelligence
    Ng, Yu-Leung
    [J]. INFORMATION TECHNOLOGY & PEOPLE, 2024,
  • [43] Acceptance of artificial intelligence augmented systematic reviews by health technology assessment bodies
    Umapathi, K.
    Nevis, I.
    [J]. VALUE IN HEALTH, 2024, 27 (06) : S272 - S273
  • [44] Applying the technology acceptance model for artificial intelligence tools in routine obstetric ultrasound
    Minopoli, M.
    Lambton, B.
    Younger, A.
    Dall'Asta, A.
    Papageorghiou, A. T.
    [J]. ULTRASOUND IN OBSTETRICS & GYNECOLOGY, 2023, 62 : 109 - 109
  • [45] Attitudes toward artificial intelligence: combining three theoretical perspectives on technology acceptance
    Koenig, Pascal D.
    [J]. AI & SOCIETY, 2024,
  • [46] Theorizing artificial intelligence acceptance and digital entrepreneurship model
    Upadhyay, Nitin
    Upadhyay, Shalini
    Dwivedi, Yogesh K.
    [J]. INTERNATIONAL JOURNAL OF ENTREPRENEURIAL BEHAVIOR & RESEARCH, 2022, 28 (05): 1138 - 1166
  • [47] Comprehension, apprehension, and acceptance: Understanding the influence of literacy and anxiety on acceptance of artificial Intelligence
    Schiavo, Gianluca
    Businaro, Stefano
    Zancanaro, Massimo
    [J]. TECHNOLOGY IN SOCIETY, 2024, 77
  • [48] Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work
    Theis, Sabine
    Jentzsch, Sophie
    Deligiannaki, Fotini
    Berro, Charles
    Raulf, Arne Peter
    Bruder, Carmen
    [J]. ARTIFICIAL INTELLIGENCE IN HCI, AI-HCI 2023, PT I, 2023, 14050 : 355 - 380
  • [49] Artificial intelligence acceptance in services: connecting with Generation Z
    Vitezic, Vanja
    Peric, Marko
    [J]. SERVICE INDUSTRIES JOURNAL, 2021, 41 (13-14): 926 - 946
  • [50] Why we cannot trust artificial intelligence in medicine
    DeCamp, Matthew
    Tilburt, Jon C.
    [J]. LANCET DIGITAL HEALTH, 2019, 1 (08): E390 - E390