Theory of trust and acceptance of artificial intelligence technology (TrAAIT): An instrument to assess clinician trust and acceptance of artificial intelligence

Times Cited: 3
Authors
Stevens, Alexander F. [1]
Stetson, Pete [1,2]
Affiliations
[1] Mem Sloan Kettering Canc Ctr, Digital Prod & Informat Div, DigITs, New York, NY 10065 USA
[2] Mem Sloan Kettering Canc Ctr, Dept Med, New York, NY USA
Funding
U.S. National Institutes of Health
Keywords
Clinician trust; Trustworthy artificial intelligence/machine learning (AI/ML); Human computer interaction; Technology acceptance; Digital healthcare; AI/ML adoption; INFORMATION-SYSTEMS SUCCESS; PLS-SEM; USER ACCEPTANCE; UNIFIED THEORY; MODEL; MANAGEMENT; QUALITY; INTENTION; DELONE; HEALTH
DOI
10.1016/j.jbi.2023.104550
CLC Classification Number
TP39 [Computer Applications]
Discipline Classification Codes
081203; 0835
Abstract
Background: Artificial intelligence and machine learning (AI/ML) technologies such as generative and ambient AI solutions are proliferating in real-world healthcare settings. Clinician trust affects the adoption and impact of these systems, so organizations need a validated method to assess the factors underlying clinicians' trust in and acceptance of AI in clinical workflows.

Objective: Our study set out to develop and assess a novel clinician-centered model to measure and explain trust and adoption of AI technology. We hypothesized that clinicians' system-specific Trust in AI is the primary predictor of both Acceptance (i.e., willingness to adopt) and post-adoption Trusting Stance (i.e., general stance towards any AI system). We validated the new model at an urban comprehensive cancer center and produced an easily implemented survey tool for measuring clinician trust and adoption of AI.

Methods: This survey-based, cross-sectional, psychometric study included a model development phase and a validation phase. Measurement used five-point ascending unidirectional Likert scales. The development sample included N = 93 clinicians (physicians, advanced practice providers, nurses) who used an AI-based communication application. The validation sample included N = 73 clinicians who used a commercially available AI-powered speech-to-text application for note-writing in an electronic health record (EHR). Analytical procedures included exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and partial least squares structural equation modeling (PLS-SEM). The Johnson-Neyman (JN) methodology was used to determine moderator effects.

Results: In the fully moderated causal model, clinician trust explained a large share of the variance in acceptance of a specific AI application (56%) and in post-adoption trusting stance towards AI in general (36%). Moderators included organizational assurances, length of time using the application, and clinician age. The final validated instrument has 20 items and takes 5 min to complete on average.

Conclusions: We found that clinician acceptance of AI is determined by the degree of trust formed via information credibility, perceived application value, and reliability. The novel model, TrAAIT, explains the factors underlying AI trustworthiness and acceptance for clinicians. With its easy-to-use instrument and Summative Score Dashboard, TrAAIT can help organizations implementing AI to identify and intercept barriers to clinician adoption in real-world settings.
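To make the Johnson-Neyman step in the Methods concrete, below is a minimal sketch of the JN region-of-significance computation for a moderated regression, written in Python with statsmodels on simulated data. The variable names (trust, age, acceptance) and all effect sizes are illustrative assumptions standing in for the paper's constructs; this is not the authors' code or data.

```python
# Johnson-Neyman sketch: where along a moderator (e.g., clinician age) is the
# conditional effect of trust on acceptance statistically significant?
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 166  # illustrative sample size only
trust = rng.normal(0, 1, n)       # mean-centered Trust in AI composite (assumed)
age = rng.normal(0, 1, n)         # mean-centered moderator (assumed)
acceptance = 0.6 * trust + 0.1 * age - 0.15 * trust * age + rng.normal(0, 1, n)

# Moderated regression: acceptance ~ trust + age + trust:age
X = sm.add_constant(np.column_stack([trust, age, trust * age]))
fit = sm.OLS(acceptance, X).fit()
b = fit.params            # [intercept, b_trust, b_age, b_interaction]
V = fit.cov_params()      # coefficient covariance matrix

# Conditional effect of trust at moderator value w: theta(w) = b1 + b3*w.
# JN boundaries solve theta(w)^2 = t_crit^2 * Var(theta(w)), a quadratic in w:
# (b3^2 - t^2 V33) w^2 + 2 (b1 b3 - t^2 V13) w + (b1^2 - t^2 V11) = 0
t2 = stats.t.ppf(0.975, fit.df_resid) ** 2
a = b[3] ** 2 - t2 * V[3, 3]
bq = 2 * (b[1] * b[3] - t2 * V[1, 3])
c = b[1] ** 2 - t2 * V[1, 1]
disc = bq ** 2 - 4 * a * c
if disc < 0:
    print("No JN boundary: significance of the trust effect does not change "
          "over the moderator's range.")
else:
    roots = sorted([(-bq - np.sqrt(disc)) / (2 * a),
                    (-bq + np.sqrt(disc)) / (2 * a)])
    print(f"JN significance boundaries at moderator values: {roots}")
```

Unlike a simple-slopes probe at fixed moderator values, this approach recovers the full interval of moderator values over which the trust effect is significant, which is how JN results for moderators such as clinician age are typically reported.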
Pages: 14