Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review

Cited by: 0
Authors
Haomin Chen
Catalina Gomez
Chien-Ming Huang
Mathias Unberath
Institution
[1] Johns Hopkins University, Department of Computer Science
DOI: not available
Abstract
Transparency in Machine Learning (ML), often also referred to as interpretability or explainability, attempts to reveal the working mechanisms of complex models. From a human-centered design perspective, transparency is not a property of the ML model but an affordance, i.e., a relationship between the algorithm and its users. Thus, prototyping and user evaluations are critical to attaining solutions that afford transparency. Following human-centered design principles in highly specialized and high-stakes domains, such as medical image analysis, is challenging due to limited access to end users and the knowledge imbalance between those users and ML designers. To investigate the state of transparent ML in medical image analysis, we conducted a systematic review of the literature from 2012 to 2021 in the PubMed, EMBASE, and Compendex databases. We identified 2508 records, of which 68 articles met the inclusion criteria. Current techniques in transparent ML are dominated by computational feasibility and barely consider end users, e.g., clinical stakeholders. Despite the different roles and knowledge of ML developers and end users, no study reported formative user research to inform the design and development of transparent ML models. Only a few studies validated transparency claims through empirical user evaluations. These shortcomings put contemporary research on transparent ML at risk of being incomprehensible to users and, thus, clinically irrelevant. To alleviate these shortcomings in forthcoming research, we introduce the INTRPRT guideline, a design directive for transparent ML systems in medical image analysis. The INTRPRT guideline suggests human-centered design principles, recommending formative user research as the first step to understand user needs and domain requirements. Following these guidelines increases the likelihood that the algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.