Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review

Cited by: 0

Authors
Haomin Chen
Catalina Gomez
Chien-Ming Huang
Mathias Unberath

Affiliations
[1] Johns Hopkins University, Department of Computer Science
DOI: not available
Abstract
Transparency in Machine Learning (ML), often also referred to as interpretability or explainability, attempts to reveal the working mechanisms of complex models. From a human-centered design perspective, transparency is not a property of the ML model but an affordance, i.e., a relationship between algorithm and users. Thus, prototyping and user evaluations are critical to attaining solutions that afford transparency. Following human-centered design principles in highly specialized and high-stakes domains, such as medical image analysis, is challenging due to the limited access to end users and the knowledge imbalance between those users and ML designers. To investigate the state of transparent ML in medical image analysis, we conducted a systematic review of the literature from 2012 to 2021 in the PubMed, EMBASE, and Compendex databases. We identified 2508 records, of which 68 articles met the inclusion criteria. Current techniques in transparent ML are dominated by computational feasibility and barely consider end users, e.g., clinical stakeholders. Despite the different roles and knowledge of ML developers and end users, no study reported formative user research to inform the design and development of transparent ML models. Only a few studies validated transparency claims through empirical user evaluations. These shortcomings put contemporary research on transparent ML at risk of being incomprehensible to users and, thus, clinically irrelevant. To alleviate these shortcomings in forthcoming research, we introduce the INTRPRT guideline, a design directive for transparent ML systems in medical image analysis. The INTRPRT guideline suggests human-centered design principles, recommending formative user research as the first step to understand user needs and domain requirements. Following these guidelines increases the likelihood that the algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.
Related papers (50 total)
  • [41] From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
    Nauta, Meike
    Trienes, Jan
    Pathak, Shreyasi
    Nguyen, Elisa
    Peters, Michelle
    Schmitt, Yasmin
    Schloetterer, Joerg
    Van Keulen, Maurice
    Seifert, Christin
    ACM COMPUTING SURVEYS, 2023, 55 (13S)
  • [42] Divergent and convergent thinking processes in smart cities: A systematic review of human-centered design practices
    Ahmadzai, Palwasha
    CITIES, 2025, 159
  • [43] The implications of EEG neurophysiological data in human-centered architectural design: A systematic review and bibliometric analysis
    Zhao, Mingming
    Crossley, Tatjana
    Shinohara, Hiroyuki
    JOURNAL OF ENVIRONMENTAL PSYCHOLOGY, 2025, 103
  • [44] Human-Centered Design of Mobile Health Apps for Older Adults: Systematic Review and Narrative Synthesis
    Nimmanterdwong, Zethapong
    Boonviriya, Suchaya
    Tangkijvanich, Pisit
    JMIR MHEALTH AND UHEALTH, 2022, 10 (01)
  • [45] 'Talking with your Car': Design of Human-Centered Conversational AI in Autonomous Vehicles
    Rege, Akshay
    Currano, Rebecca
    Sirkin, David
    Kim, Euiyoung
    PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON AUTOMOTIVE USER INTERFACES AND INTERACTIVE VEHICULAR APPLICATIONS, AUTOMOTIVEUI 2024, 2024: 338-349
  • [46] tachAId - An interactive tool supporting the design of human-centered AI solutions
    Bauroth, Max
    Rath-Manakidis, Pavlos
    Langholf, Valentin
    Wiskott, Laurenz
    Glasmachers, Tobias
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2024, 7
  • [47] Toward human-centered AI: A perspective from human-computer interaction
    Xu W.
    Interactions, 2019, 26 (04): 42-46
  • [48] Towards Human-Centered Design of AI Service Chatbots: Defining the Building Blocks
    Hartikainen, Maria
    Vaananen, Kaisa
    ARTIFICIAL INTELLIGENCE IN HCI, AI-HCI 2023, PT II, 2023, 14051: 68-87
  • [49] A Review of Immersive Technologies, Knowledge Representation, and AI for Human-Centered Digital Experiences
    Partarakis, Nikolaos
    Zabulis, Xenophon
    ELECTRONICS, 2024, 13 (02)
  • [50] Pain recognition and pain empathy from a human-centered AI perspective
    Cao, Siqi
    Fu, Di
    Yang, Xu
    Wermter, Stefan
    Liu, Xun
    Wu, Haiyan
    ISCIENCE, 2024, 27 (08)