"Fifty Shades of Bias": Normative Ratings of Gender Bias in GPT Generated English Text

Cited: 0
Authors
Hada, Rishav [1 ]
Seth, Agrima [1 ,2 ]
Diddee, Harshita [3 ]
Bali, Kalika [1 ]
Affiliations
[1] Microsoft Res India, Bengaluru, India
[2] Univ Michigan, Sch Informat, Ann Arbor, MI 48109 USA
[3] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Keywords: none
DOI: none
CLC classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Language serves as a powerful tool for the manifestation of societal belief systems, and in doing so it also perpetuates the prevalent biases in our society. Gender bias is one of the most pervasive biases and is seen in both online and offline discourses. With LLMs increasingly attaining human-like fluency in text generation, gaining a nuanced understanding of the biases these systems can produce is imperative. Prior work often treats gender bias as a binary classification task. However, acknowledging that bias is perceived on a relative scale, we investigate the generation of, and human annotators' receptivity to, bias of varying degrees. Specifically, we create the first dataset of GPT-generated English text with normative ratings of gender bias. Ratings were obtained using Best-Worst Scaling, an efficient comparative annotation framework. Next, we systematically analyze the variation of themes of gender bias across the observed ranking and show that identity attack is most closely related to gender bias. Finally, we evaluate the performance of existing automated models trained on related concepts on our dataset.
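The Best-Worst Scaling framework named above can be sketched with the standard counting procedure: annotators see small tuples of items and mark the most and least biased one, and each item's score is the fraction of times it was chosen best minus the fraction of times it was chosen worst. The item names and annotations below are hypothetical, for illustration only.

```python
from collections import Counter

def bws_scores(annotations):
    """Best-Worst Scaling by counting:
    score(item) = (#times best - #times worst) / #appearances, in [-1, 1]."""
    best, worst, seen = Counter(), Counter(), Counter()
    for items, chosen_best, chosen_worst in annotations:
        for item in items:
            seen[item] += 1
        best[chosen_best] += 1
        worst[chosen_worst] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}

# Hypothetical annotations: (tuple shown, item marked most biased, item marked least biased)
annotations = [
    (("s1", "s2", "s3", "s4"), "s1", "s4"),
    (("s1", "s2", "s3", "s4"), "s2", "s4"),
    (("s2", "s3", "s4", "s1"), "s2", "s1"),
]
scores = bws_scores(annotations)  # e.g. scores["s2"] ≈ 0.67, scores["s4"] ≈ -0.67
```

The resulting real-valued scores induce the normative ranking over items, rather than a binary biased/unbiased label.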
Pages: 1862-1876
Page count: 15
Related papers
50 records total
  • [31] Female Teachers' Perceptions of Gender Bias in Pakistani English Textbooks
    Mahmood, Tariq
    Kausar, Ghazala
    ASIAN WOMEN, 2019, 35 (04) : 109 - 126
  • [32] An Experimental Study of Bias in Platform Worker Ratings: The Role of Performance Quality and Gender
    Jahanbakhsh, Farnaz
    Cranshaw, Justin
    Counts, Scott
    Lasecki, Walter S.
    Inkpen, Kori
    PROCEEDINGS OF THE 2020 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'20), 2020,
  • [33] Bad Apples on Rotten Tomatoes: Critics, Crowds, and Gender Bias in Product Ratings
    Aguiar, Luis
    MARKETING SCIENCE, 2024,
  • [34] Racial and Gender Bias in Artificial Intelligence Generated Ophthalmologic Educational Material
    Lee, Gabriela Georgina
    Goodman, Deniz
    Chang, Ta Chen
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2024, 65 (07)
  • [35] A Positivity Bias in Written and Spoken English and Its Moderation by Personality and Gender
    Augustine, Adam A.
    Mehl, Matthias R.
    Larsen, Randy J.
    SOCIAL PSYCHOLOGICAL AND PERSONALITY SCIENCE, 2011, 2 (05) : 508 - 515
  • [36] Still in the shadow of Confucianism? Gender bias in contemporary English textbooks in Vietnam
    Vu, Mai Trang
    Pham, Thi Thanh Thuy
    PEDAGOGY CULTURE AND SOCIETY, 2023, 31 (03): : 477 - 497
  • [37] How Far Can It Go? On Intrinsic Gender Bias Mitigation for Text Classification
    Tokpo, Ewoenam Kwaku
    Delobelle, Pieter
    Berendt, Bettina
    Calders, Toon
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 3418 - 3433
  • [38] The invisible women: uncovering gender bias in AI-generated images of professionals
    Gorska, Anna M.
    Jemielniak, Dariusz
    FEMINIST MEDIA STUDIES, 2023, 23 (08) : 4370 - 4375
  • [39] VISOGENDER: A dataset for benchmarking gender bias in image-text pronoun resolution
    Hall, Siobhan Mackenzie
    Abrantes, Fernanda Goncalves
    Zhu, Hanwen
    Sodunke, Grace
    Shtedritski, Aleksandar
    Kirk, Hannah Rose
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [40] Adventures with Anxiety: Gender bias in using a digital game for teaching vocational English
    Ahmadian, Shilan
    Brevik, Lisbeth M.
    Ohrn, Elisabet
    JOURNAL OF COMPUTER ASSISTED LEARNING, 2024, 40 (06) : 2715 - 2734