"Fifty Shades of Bias": Normative Ratings of Gender Bias in GPT Generated English Text

Cited by: 0
Authors
Hada, Rishav [1 ]
Seth, Agrima [1 ,2 ]
Diddee, Harshita [3 ]
Bali, Kalika [1 ]
Affiliations
[1] Microsoft Res India, Bengaluru, India
[2] Univ Michigan, Sch Informat, Ann Arbor, MI 48109 USA
[3] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Keywords: none listed
DOI: not available
CLC classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Language serves as a powerful tool for the manifestation of societal belief systems, and in doing so it also perpetuates the prevalent biases in our society. Gender bias is one of the most pervasive biases in society and is seen in both online and offline discourse. As LLMs increasingly attain human-like fluency in text generation, a nuanced understanding of the biases these systems can generate is imperative. Prior work often treats gender bias as a binary classification task. However, recognizing that bias is perceived on a relative scale, we investigate the generation of, and the consequent receptivity of human annotators to, bias of varying degrees. Specifically, we create the first dataset of GPT-generated English text with normative ratings of gender bias. Ratings were obtained using Best-Worst Scaling, an efficient comparative annotation framework. Next, we systematically analyze the variation of themes of gender bias in the observed ranking and show that identity attack is most closely related to gender bias. Finally, we evaluate the performance of existing automated models trained on related concepts on our dataset.
Pages: 1862-1876 (15 pages)
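
The ratings described in the abstract were collected with Best-Worst Scaling (BWS): annotators see small tuples of texts (commonly 4-tuples) and mark the most and least biased item, and real-valued scores are then recovered by simple counting. The Python sketch below illustrates the standard BWS counting procedure (score = (#times best - #times worst) / #times shown, as in Kiritchenko and Mohammad, 2017), not the paper's exact pipeline; the tuple size, item labels, and toy annotations are illustrative assumptions.

from collections import defaultdict

def bws_scores(annotations):
    """Compute real-valued Best-Worst Scaling scores.

    `annotations` is a list of (items, best, worst) records, where `items`
    is the tuple of texts shown to an annotator and best/worst are the
    items picked as most and least biased. Each item's score is
    (#times best - #times worst) / #times it appeared, yielding a value
    in [-1, 1]; this is the standard BWS counting procedure, not the
    authors' published code.
    """
    best_counts = defaultdict(int)
    worst_counts = defaultdict(int)
    appearances = defaultdict(int)

    for items, best, worst in annotations:
        for item in items:
            appearances[item] += 1
        best_counts[best] += 1
        worst_counts[worst] += 1

    return {
        item: (best_counts[item] - worst_counts[item]) / n
        for item, n in appearances.items()
    }

# Toy usage with three annotated 4-tuples of sentences a..e (hypothetical data).
annotations = [
    (("a", "b", "c", "d"), "a", "d"),
    (("a", "b", "c", "e"), "a", "e"),
    (("b", "c", "d", "e"), "c", "e"),
]
print(bws_scores(annotations))  # e.g. {'a': 1.0, 'b': 0.0, ..., 'e': -1.0}

In practice each item is shown in several distinct tuples so that the counts average over annotators; ranking items by these scores yields the relative bias ordering the abstract refers to.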