A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare

Cited by: 10
Authors
Fehr, Jana [1 ,2 ,3 ]
Citro, Brian [4 ]
Malpani, Rohit [5 ]
Lippert, Christoph [1 ,2 ,4 ]
Madai, Vince I. [3 ,5 ]
Affiliations
[1] Hasso Plattner Inst, Digital Hlth & Machine Learning, Potsdam, Germany
[2] Univ Potsdam, Digital Engn Fac, Potsdam, Germany
[3] Charite Univ Med Berlin, Berlin Inst Hlth BIH, QUEST Ctr Responsible Res, Berlin, Germany
[4] Icahn Sch Med Mt Sinai, Hasso Plattner Inst Digital Hlth Mt Sinai, New York, NY USA
[5] Birmingham City Univ, Fac Comp Engn & Built Environm, Sch Comp & Digital Technol, Birmingham, England
Keywords
medical AI; AI ethics; transparency; medical device regulation; trustworthy AI; MEDICAL DEVICES; BIAS
DOI
10.3389/fdgth.2024.1267290
CLC number
R19 [Health Care Organization and Services (Health Service Management)]
Abstract
Trustworthy medical AI requires transparency about the development and testing of underlying algorithms to identify biases and communicate potential risks of harm. Abundant guidance exists on how to achieve transparency for medical AI products, but it is unclear whether publicly available information adequately informs about their risks. To assess this, we retrieved public documentation on the 14 available CE-certified AI-based radiology products of risk class IIb in the EU from vendor websites, scientific publications, and the European EUDAMED database. Using a self-designed survey, we reported on their development, validation, ethical considerations, and deployment caveats, according to trustworthy AI guidelines. We scored each question with 0, 0.5, or 1 to rate whether the required information was "unavailable", "partially available", or "fully available". The transparency of each product was calculated relative to all 55 questions. Transparency scores ranged from 6.4% to 60.9%, with a median of 29.1%. Major transparency gaps included missing documentation on training data, ethical considerations, and limitations for deployment. Ethical aspects such as consent, safety monitoring, and GDPR compliance were rarely documented. Furthermore, deployment caveats for different demographics and medical settings were scarce. In conclusion, the public documentation of authorized medical AI products in Europe is insufficiently transparent to inform about safety and risks. We call on lawmakers and regulators to establish legally mandated requirements for public and substantive transparency to fulfill the promise of trustworthy AI for health.
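The scoring scheme described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: the function name and the example answer vector are hypothetical; only the scale (0 / 0.5 / 1 over 55 questions, reported as a percentage of the maximum) comes from the abstract.

```python
from statistics import median

def transparency_score(answers):
    """Transparency as a percentage of the maximum score.

    `answers` holds one value per survey question:
    0 = unavailable, 0.5 = partially available, 1 = fully available.
    """
    assert len(answers) == 55, "the survey has 55 questions"
    assert all(a in (0, 0.5, 1) for a in answers)
    return 100 * sum(answers) / len(answers)

# Hypothetical product: 10 items fully, 15 partially, 30 not documented
example = [1] * 10 + [0.5] * 15 + [0] * 30
print(round(transparency_score(example), 1))  # 31.8
```

With per-product scores computed this way, the paper's summary statistics (range and median across the 14 products) follow directly, e.g. via `statistics.median`.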
Pages: 11
Related papers
50 records
  • [1] Towards Trustworthy Artificial Intelligence in Healthcare
    Leung, Carson K.
    Madill, Evan W. R.
    Souza, Joglas
    Zhang, Christine Y.
    2022 IEEE 10TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI 2022), 2022, : 626 - 632
  • [2] Is ChatGPT ready to change mental healthcare? Challenges and considerations: a reality-check
    Pandya, Apurvakumar
    Lodha, Pragya
    Ganatra, Amit
    FRONTIERS IN HUMAN DYNAMICS, 2024, 5
  • [3] Secure and Trustworthy Artificial Intelligence-extended Reality (AI-XR) for Metaverses
    Qayyum, Adnan
    Butt, Muhammad Atif
    Ali, Hassan
    Usman, Muhammad
    Halabi, Osama
    Al-Fuqaha, Ala
    Abbasi, Qammer H.
    Imran, Muhammad Ali
    Qadir, Junaid
    ACM COMPUTING SURVEYS, 2024, 56 (07) : 1 - 38
  • [4] FUTURE-AI: international consensus guideline for trustworthy and deployable artificial intelligence in healthcare
    Lekadir, Karim
    Frangi, Alejandro F.
    Porras, Antonio R.
    Glocker, Ben
    Cintas, Celia
    Langlotz, Curtis P.
    Weicken, Eva
    Asselbergs, Folkert W.
    Prior, Fred
    Collins, Gary S.
    Kaissis, Georgios
    Tsakou, Gianna
    Buvat, Irene
    Kalpathy-Cramer, Jayashree
    Mongan, John
    Schnabel, Julia A.
    Kushibar, Kaisar
    Riklund, Katrine
    Marias, Kostas
    Amugongo, Lameck M.
    Fromont, Lauren A.
    Maier-Hein, Lena
    Cerda-Alberich, Leonor
    Marti-Bonmati, Luis
    Cardoso, M. Jorge
    Bobowicz, Maciej
    Shabani, Mahsa
    Tsiknakis, Manolis
    Zuluaga, Maria A.
    Fritzsche, Marie-Christine
    Camacho, Marina
    Linguraru, Marius George
    Wenzel, Markus
    De Bruijne, Marleen
    Tolsgaard, Martin G.
    Goisauf, Melanie
    Abadia, Monica Cano
    Papanikolaou, Nikolaos
    Lazrak, Noussair
    Pujol, Oriol
    Osuala, Richard
    Napel, Sandy
    Colantonio, Sara
    Joshi, Smriti
    Klein, Stefan
    Ausso, Susanna
    Rogers, Wendy A.
    Salahuddin, Zohaib
    Starmans, Martijn P. A.
    BMJ-BRITISH MEDICAL JOURNAL, 2025, 388
  • [5] An Overview for Trustworthy and Explainable Artificial Intelligence in Healthcare
    Arslanoglu, Kubra
    Institute of Electrical and Electronics Engineers Inc.
  • [6] Artificial Intelligence (AI) in Healthcare
    Besch, Christian
    ZEITSCHRIFT FUR PALLIATIVMEDIZIN, 2020, 21 (04): : 146 - 147
  • [8] Cultivating Patient-Centered Healthcare Artificial Intelligence Transparency: Considerations for AI Documentation
    Stroud, Austin M.
    Miller, Jennifer E.
    Barry, Barbara A.
    AMERICAN JOURNAL OF BIOETHICS, 2025, 25 (03): : 129 - 131
  • [9] Requirements for Trustworthy Artificial Intelligence and its Application in Healthcare
    Kim, Myeongju
    Sohn, Hyoju
    Choi, Sookyung
    Kim, Sejoong
    HEALTHCARE INFORMATICS RESEARCH, 2023, 29 (04) : 315 - 322
  • [10] An Explainable AI Solution: Exploring Extended Reality as a Way to Make Artificial Intelligence More Transparent and Trustworthy
    Wheeler, Richard
    Carroll, Fiona
    PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON CYBERSECURITY, SITUATIONAL AWARENESS AND SOCIAL MEDIA, CYBER SCIENCE 2022, 2023, : 255 - 276