"Fifty Shades of Bias": Normative Ratings of Gender Bias in GPT Generated English Text

Cited by: 0
Authors: Hada, Rishav [1]; Seth, Agrima [1,2]; Diddee, Harshita [3]; Bali, Kalika [1]
Affiliations:
[1] Microsoft Res India, Bengaluru, India
[2] Univ Michigan, Sch Informat, Ann Arbor, MI 48109 USA
[3] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Language serves as a powerful tool for the manifestation of societal belief systems, and in doing so it also perpetuates the prevalent biases in our society. Gender bias is one of the most pervasive biases and appears in both online and offline discourse. With LLMs increasingly attaining human-like fluency in text generation, gaining a nuanced understanding of the biases these systems can produce is imperative. Prior work often treats gender bias as a binary classification task. However, acknowledging that bias should be perceived on a relative scale, we investigate the generation of bias of varying degrees and annotators' consequent receptivity to it. Specifically, we create the first dataset of GPT-generated English text with normative ratings of gender bias. The ratings were obtained using Best-Worst Scaling, an efficient comparative annotation framework. Next, we systematically analyze the variation of themes of gender bias in the observed ranking and show that identity attack is most closely related to gender bias. Finally, we report the performance on our dataset of existing automated models trained on related concepts.
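The Best-Worst Scaling (BWS) mentioned in the abstract is a comparative annotation setup: annotators see small tuples (typically four items), mark only the most and the least biased item, and real-valued ratings are then recovered with a simple counting estimate, score(item) = (#times chosen best − #times chosen worst) / #times shown. Below is a minimal sketch of that standard counting procedure; the function name, record layout, and toy sentences are illustrative assumptions and are not taken from the paper's released dataset.

```python
from collections import defaultdict

def bws_scores(annotations):
    """Turn Best-Worst Scaling annotations into real-valued scores in [-1, 1].

    Each annotation is a dict with:
      "tuple": the items shown together to an annotator,
      "best":  the item judged most biased,
      "worst": the item judged least biased.
    score(item) = (#best - #worst) / #appearances.
    """
    best, worst, shown = defaultdict(int), defaultdict(int), defaultdict(int)
    for ann in annotations:
        for item in ann["tuple"]:
            shown[item] += 1
        best[ann["best"]] += 1
        worst[ann["worst"]] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Toy example with placeholder sentence IDs, not dataset items.
annotations = [
    {"tuple": ["s1", "s2", "s3", "s4"], "best": "s2", "worst": "s4"},
    {"tuple": ["s1", "s2", "s3", "s4"], "best": "s2", "worst": "s1"},
    {"tuple": ["s2", "s3", "s4", "s5"], "best": "s3", "worst": "s5"},
]
print(bws_scores(annotations))  # s2 gets the highest score, s5 the lowest
```

Scores produced this way can be rescaled to [0, 1] to serve as normative ratings, against which outputs of existing classifiers for related concepts (e.g., identity attack or toxicity) can then be correlated.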
Pages: 1862-1876 (15 pages)
Related papers (showing items [21]-[30] of 50)
  • [21] Nasality of infant vocalizations determines gender bias in adult favorability ratings
    Bloom, K
    Moore-Schoenmakers, K
    Masataka, N
    JOURNAL OF NONVERBAL BEHAVIOR, 1999, 23 (03) : 219 - 236
  • [22] The role of gender bias in patient ratings of minimally invasive gynecologic surgeons
    Urbina, P.
    Yang, L.
    Swartz, S.
    Emeka, A.
    AMERICAN JOURNAL OF OBSTETRICS AND GYNECOLOGY, 2024, 230 (04) : S1157 - S1157
  • [23] Gender bias is more exaggerated in online images than in text
    Hofstra, Bas
    Mulders, Anne Maaike
    NATURE, 2024, 626 (8001) : 960 - 961
  • [24] Detecting gender bias in Arabic text through word embeddings
    Mourad, Aya
    Abu Salem, Fatima K.
    Elbassuoni, Shady
    PLOS ONE, 2025, 20 (03)
  • [25] Gender Bias in the Pesantren Literature (A Case Study on Uqudulujjain Text)
    Abdullah, Muh
    ADVANCED SCIENCE LETTERS, 2017, 23 (10) : 9968 - 9971
  • [26] Easy Adaptation to Mitigate Gender Bias in Multilingual Text Classification
    Huang, Xiaolei
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 717 - 723
  • [27] SEX AND GENDER BIAS IN ANATOMY AND PHYSICAL DIAGNOSIS TEXT ILLUSTRATIONS
    MENDELSOHN, KD
    NIEMAN, LZ
    ISAACS, K
    LEE, S
    LEVISON, SP
    JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION, 1994, 272 (16): 1267 - 1270
  • [28] Examining and mitigating gender bias in text emotion detection task
    Odbal
    Zhang, Guanhong
    Ananiadou, Sophia
    NEUROCOMPUTING, 2022, 493 : 422 - 434
  • [29] Proposed Taxonomy for Gender Bias in Text; A Filtering Methodology for the Gender Generalization Subtype
    Hitti, Yasmeen
    Jang, Eunbee
    Moreno, Ines
    Pelletier, Carolyne
    GENDER BIAS IN NATURAL LANGUAGE PROCESSING (GEBNLP 2019), 2019, : 8 - 17
  • [30] Evaluating Gender Bias in Hindi-English Machine Translation
    Gupta, Gauri
    Ramesh, Krithika
    Singh, Sanjay
    GEBNLP 2021: THE 3RD WORKSHOP ON GENDER BIAS IN NATURAL LANGUAGE PROCESSING, 2021, : 16 - 23