Implicit bias in large language models: Experimental proof and implications for education

被引:0
|
作者
Warr, Melissa [1 ]
Oster, Nicole Jakubczyk [2 ]
Isaac, Roger [1 ]
机构
[1] New Mexico State Univ, POB 30001, Las Cruces, NM 88003 USA
[2] Arizona State Univ, Tempe, AZ USA
关键词
Generative AI; large language models; critical technology studies; systemic bias; systemic inequity; ACHIEVEMENT GAP; SCHOOL; IDENTITY;
D O I
10.1080/15391523.2024.2395295
中图分类号
G40 [教育学];
学科分类号
040101 ; 120403 ;
摘要
We provide experimental evidence of implicit racial bias in a large language model (specifically ChatGPT 3.5) in the context of an educational task and discuss implications for the use of these tools in educational contexts. Specifically, we presented ChatGPT with identical student writing passages alongside various descriptions of student demographics, including race, socioeconomic status, and school type. Results indicate that when directly prompted to consider race, the model produced higher overall scores than responses to a control prompt, but scores given to student descriptors of Black and White were not significantly different. However, this result belied a subtler form of prejudice that was statistically significant when racial indicators were implied rather than explicitly stated. Additionally, our investigation uncovered subtle sequence effects that suggest the model is more likely to illustrate bias when variables change within a single chat. The evidence indicates that despite the implementation of guardrails by developers, biases are profoundly embedded in ChatGPT, reflective of both the training data and societal biases at large. While overt biases can be addressed to some extent, the more ingrained implicit biases present a greater challenge for the application of these technologies in education. It is critical to develop an understanding of the bias embedded in these models and how this bias presents itself in educational contexts before using LLMs to develop personalized learning tools.
引用
收藏
页数:26
相关论文
共 50 条
  • [1] Large Language Models and Their Implications on Medical Education
    Bair, Henry
    Norden, Justin
    [J]. ACADEMIC MEDICINE, 2023, 98 (08) : 869 - 870
  • [2] Ethical implications of implicit bias in nursing education
    Edwards-Maddox, Shermel
    Reid, Amy
    Quintana, Danielle M.
    [J]. TEACHING AND LEARNING IN NURSING, 2022, 17 (04) : 441 - 445
  • [3] A systematic review of large language models and their implications in medical education
    Lucas, Harrison C.
    Upperman, Jeffrey S.
    Robinson, Jamie R.
    [J]. MEDICAL EDUCATION, 2024,
  • [4] The Role of Large Language Models in Medical Education: Applications and Implications
    Safranek, Conrad W.
    Sidamon-Eristoff, Anne Elizabeth
    Gilson, Aidan
    Chartash, David
    [J]. JMIR MEDICAL EDUCATION, 2023, 9
  • [5] The life cycle of large language models in education: A framework for understanding sources of bias
    Lee, Jinsook
    Hicke, Yann
    Yu, Renzhe
    Brooks, Christopher
    Kizilcec, Rene F.
    [J]. BRITISH JOURNAL OF EDUCATIONAL TECHNOLOGY, 2024,
  • [6] Experimental Evaluation of Implicit Bias Education in the College Classroom
    Hawkins, Carlee Beth
    Camp, Alexis Z.
    Schunke, Matthew P.
    [J]. TEACHING OF PSYCHOLOGY, 2022,
  • [7] Bias and Fairness in Large Language Models: A Survey
    Gallegos, Isabel O.
    Rossi, Ryan A.
    Barrow, Joe
    Tanjim, Md Mehrab
    Kim, Sungchul
    Dernoncourt, Franck
    Yu, Tong
    Zhang, Ruiyi
    Ahmed, Nesreen K.
    [J]. COMPUTATIONAL LINGUISTICS, 2024, 50 (03) : 1097 - 1179
  • [8] Gender bias and stereotypes in Large Language Models
    Kotek, Hadas
    Dockum, Rikker
    Sun, David Q.
    [J]. PROCEEDINGS OF THE ACM COLLECTIVE INTELLIGENCE CONFERENCE, CI 2023, 2023, : 12 - 24
  • [9] Understanding Implicit Bias (UIB): Experimental Evaluation of an Online Bias Education Program
    Hawkins, Carlee Beth
    Lofaro, Nicole
    Umansky, Emily
    Ratliff, Kate A.
    [J]. JOURNAL OF EXPERIMENTAL PSYCHOLOGY-APPLIED, 2023, 29 (04) : 887 - 902
  • [10] Pipelines for Social Bias Testing of Large Language Models
    Nozza, Debora
    Bianchi, Federico
    Hovy, Dirk
    [J]. PROCEEDINGS OF WORKSHOP ON CHALLENGES & PERSPECTIVES IN CREATING LARGE LANGUAGE MODELS (BIGSCIENCE EPISODE #5), 2022, : 68 - 74