Assessing the communication gap between AI models and healthcare professionals: Explainability, utility and trust in AI-driven clinical decision-making

Cited: 20
Authors
Wysocki, Oskar [1 ,2 ]
Davies, Jessica Katharine [3 ]
Vigo, Markel [1 ]
Armstrong, Anne Caroline [3 ]
Landers, Donal [2 ]
Lee, Rebecca [3 ]
Freitas, Andre [1 ,2 ,4 ]
Affiliations
[1] Univ Manchester, Dept Comp Sci, Manchester, England
[2] Univ Manchester, Digital Expt Canc Med Team, CRUK Manchester Inst, Canc Biomarker Ctr, Manchester, England
[3] Univ Manchester, Fac Biol Med & Hlth, Manchester, England
[4] Idiap Res Inst, Martigny, Switzerland
Keywords
Explainable model; Explainable AI; ML in healthcare; User study; Clinical decision support; Automation bias; Confirmation bias; Explanation's impact; AUTOMATION BIAS; ARTIFICIAL-INTELLIGENCE
DOI
10.1016/j.artint.2022.103839
CLC classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
This paper contributes a pragmatic evaluation framework for explainable Machine Learning (ML) models in clinical decision support. The study revealed a more nuanced role for ML explanation models when these are pragmatically embedded in the clinical context. Despite the generally positive attitude of healthcare professionals (HCPs) towards explanations as a safety and trust mechanism, for a significant set of participants there were negative effects associated with confirmation bias, accentuating model over-reliance and increasing the effort required to interact with the model. Moreover, contradicting one of their main intended functions, standard explanatory models showed limited ability to support a critical understanding of the model's limitations. However, we found significant new positive effects that reposition the role of explanations within a clinical context: these include reduction of automation bias, support in addressing ambiguous clinical cases (cases where HCPs were not certain about their decision), and support for less experienced HCPs in acquiring new domain knowledge. (c) 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Pages: 17
Related papers
18 records in total
  • [1] Human Control and Discretion in AI-driven Decision-making in Government
    Mitrou, Lilian
    Janssen, Marijn
    Loukis, Euripidis
    [J]. 14TH INTERNATIONAL CONFERENCE ON THEORY AND PRACTICE OF ELECTRONIC GOVERNANCE (ICEGOV 2021), 2021, : 10 - 16
  • [2] WHOM WE TRUST MORE: AI-DRIVEN VS. HUMAN-DRIVEN ECONOMIC DECISION-MAKING
    Vinokurov, Fedor N.
    Sadovskaya, Ekaterina D.
    [J]. EKSPERIMENTALNAYA PSIKHOLOGIYA, 2023, 16 (02): : 87 - 100
  • [3] Bridging the Gap Between AI and Explainability in the GDPR: Towards Trustworthiness-by-Design in Automated Decision-Making
    Hamon, Ronan
    Junklewitz, Henrik
    Sanchez, Ignacio
    Malgieri, Gianclaudio
    De Hert, Paul
    [J]. IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, 2022, 17 (01) : 72 - 85
  • [4] Guest Editorial AI-Driven Decision-Making: Managerial and Organizational Promise and Potential
    Maleh, Yassine
    El-Latif, Ahmed A. Abd
    Zhang, Justin
    [J]. IEEE ENGINEERING MANAGEMENT REVIEW, 2023, 51 (01): : 11 - 15
  • [5] AI-driven decision support systems and epistemic reliance: a qualitative study on obstetricians' and midwives' perspectives on integrating AI-driven CTG into clinical decision making
    Dlugatch, Rachel
    Georgieva, Antoniya
    Kerasidou, Angeliki
    [J]. BMC MEDICAL ETHICS, 2024, 25 (01)
  • [6] Exploring explainable AI in pharmaceutical decision-making: Bridging the gap between black box models and clinical insights
    Ghorpade, V. S.
    Jadhav, Pradnya A.
    Jadhav, R. S.
    Gujar, Satish N.
    Ashok, Wankhede Vishal
    Pandit, Shraddha V.
    [J]. JOURNAL OF STATISTICS AND MANAGEMENT SYSTEMS, 2024, 27 (02) : 225 - 236
  • [7] AI-Driven Risk Management and Sustainable Decision-Making: Role of Perceived Environmental Responsibility
    Khalid, Jamshed
    Chuanmin, Mi
    Altaf, Fasiha
    Shafqat, Muhammad Mobeen
    Khan, Shahid Kalim
    Ashraf, Muhammad Umair
    [J]. SUSTAINABILITY, 2024, 16 (16)
  • [8] Unlocking Business Value: Integrating AI-Driven Decision-Making in Financial Reporting Systems
    Artene, Alin Emanuel
    Domil, Aura Emanuela
    Ivascu, Larisa
    [J]. ELECTRONICS, 2024, 13 (15)
  • [9] Between risk mitigation and labour rights enforcement: Assessing the transatlantic race to govern AI-driven decision-making through a comparative lens
    Aloisi, Antonio
    De Stefano, Valerio
    [J]. EUROPEAN LABOUR LAW JOURNAL, 2023, 14 (02) : 283 - 307