A toolbox for surfacing health equity harms and biases in large language models

Cited by: 0
Authors
Pfohl, Stephen R. [1]
Cole-Lewis, Heather [1]
Sayres, Rory [1]
Neal, Darlene [1]
Asiedu, Mercy [1]
Dieng, Awa [2]
Tomasev, Nenad [2]
Rashid, Qazi Mamunur [1]
Azizi, Shekoofeh [2]
Rostamzadeh, Negar [1]
McCoy, Liam G. [3]
Celi, Leo Anthony [4,5,6]
Liu, Yun [1]
Schaekermann, Mike [1]
Walton, Alanna [2]
Parrish, Alicia [2]
Nagpal, Chirag [1]
Singh, Preeti [1]
Dewitt, Akeiylah [1]
Mansfield, Philip [2]
Prakash, Sushant [1]
Heller, Katherine [1]
Karthikesalingam, Alan [1]
Semturs, Christopher [1]
Barral, Joelle [2]
Corrado, Greg [1]
Matias, Yossi [1]
Smith-Loud, Jamila [1]
Horn, Ivor [1]
Singhal, Karan [1]
Affiliations
[1] Google Res, Mountain View, CA 94043 USA
[2] Google DeepMind, Mountain View, CA USA
[3] Univ Alberta, Edmonton, AB, Canada
[4] MIT, Lab Computat Physiol, Cambridge, MA USA
[5] Beth Israel Deaconess Med Ctr, Div Pulm Crit Care & Sleep Med, Boston, MA USA
[6] Harvard TH Chan Sch Publ Hlth, Dept Biostat, Boston, MA USA
Funding
US National Science Foundation
Keywords
DOI
10.1038/s41591-024-03258-2
Chinese Library Classification
Q5 [Biochemistry]; Q7 [Molecular Biology]
Discipline Classification Codes
071010; 081704
Abstract
Large language models (LLMs) hold promise to serve complex health information needs but also have the potential to introduce harm and exacerbate health disparities. Reliably evaluating equity-related model failures is a critical step toward developing systems that promote health equity. We present resources and methodologies for surfacing biases with potential to precipitate equity-related harms in long-form, LLM-generated answers to medical questions and conduct a large-scale empirical case study with the Med-PaLM 2 LLM. Our contributions include a multifactorial framework for human assessment of LLM-generated answers for biases and EquityMedQA, a collection of seven datasets enriched for adversarial queries. Both our human assessment framework and our dataset design process are grounded in an iterative participatory approach and review of Med-PaLM 2 answers. Through our empirical study, we find that our approach surfaces biases that may be missed by narrower evaluation approaches. Our experience underscores the importance of using diverse assessment methodologies and involving raters of varying backgrounds and expertise. While our approach is not sufficient to holistically assess whether the deployment of an artificial intelligence (AI) system promotes equitable health outcomes, we hope that it can be leveraged and built upon toward a shared goal of LLMs that promote accessible and equitable healthcare.

The work identifies a complex panel of bias dimensions to be evaluated, proposes a framework for assessing how prone large language models are to biased reasoning, with possible consequences for equity-related harms, and applies it to a large-scale and diverse user survey on Med-PaLM 2.
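The framework described in the abstract asks human raters to judge long-form LLM answers along multiple dimensions of bias rather than with a single overall score. As a purely illustrative sketch, not code from the paper, the snippet below shows one way such multifactorial ratings could be represented and aggregated; the dimension names, the Rating class, and the "any rater, any dimension" flagging rule are assumptions introduced here for illustration, not the published instrument.

```python
# Illustrative sketch of multifactorial bias assessment of one LLM answer.
# Dimension names and the aggregation rule are assumptions, not the authors' rubric.
from dataclasses import dataclass, field

BIAS_DIMENSIONS = [
    "inaccuracy_for_some_identities",      # illustrative dimension names
    "lack_of_inclusivity",
    "stereotypical_language",
    "omission_of_structural_factors",
    "failure_to_challenge_biased_premise",
    "potential_to_withhold_opportunity",
]

@dataclass
class Rating:
    """One rater's assessment of a single LLM-generated answer."""
    rater_id: str
    # Dimensions this rater judged to be present in the answer.
    reported_dimensions: set = field(default_factory=set)

def aggregate_ratings(ratings):
    """Union of reported dimensions across raters; the answer is 'flagged'
    if at least one rater reported at least one recognized dimension."""
    valid = set(BIAS_DIMENSIONS)
    reported = set()
    for r in ratings:
        reported |= {d for d in r.reported_dimensions if d in valid}
    return {"flagged": bool(reported), "dimensions": sorted(reported)}

if __name__ == "__main__":
    # Hypothetical ratings from raters of varying backgrounds, as the paper
    # emphasizes involving diverse rater groups.
    ratings = [
        Rating("physician_1", {"omission_of_structural_factors"}),
        Rating("equity_expert_1", set()),
        Rating("consumer_1", {"lack_of_inclusivity"}),
    ]
    print(aggregate_ratings(ratings))
```

Keeping each dimension as a separate judgment, rather than collapsing to one score, is what lets an evaluation like this surface biases that a single overall-quality rating might miss.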
Pages: 3590-3600
Number of pages: 30