Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models

Cited by: 0
Authors
Cabello, Laura [1 ]
Bugliarello, Emanuele [1 ]
Brandl, Stephanie [1 ]
Elliott, Desmond [1 ]
Affiliations
[1] Univ Copenhagen, Dept Comp Sci, Copenhagen, Denmark
Keywords: (none listed)
DOI: not available
CLC Number: TP18 [Theory of Artificial Intelligence]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Pretrained machine learning models are known to perpetuate and even amplify existing biases in data, which can result in unfair outcomes that ultimately impact user experience. Therefore, it is crucial to understand the mechanisms behind those prejudicial biases to ensure that model performance does not result in discriminatory behaviour toward certain groups or populations. In this work, we take gender bias as our case study. We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models. We investigate the connection, if any, between the two learning stages, and evaluate how bias amplification is reflected in model performance. Overall, we find that bias amplification in pretraining and after fine-tuning are independent. We then examine the effect of continued pretraining on gender-neutral data, finding that this reduces group disparities, i.e., promotes fairness, on VQAv2 and retrieval tasks without significantly compromising task performance.
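For concreteness, the sketch below illustrates the two kinds of quantities the abstract refers to: a co-occurrence-based bias-amplification score (how much model predictions exaggerate an attribute-label association relative to the training data) and a group disparity (the accuracy gap between gender groups on a downstream task such as VQAv2). It is a minimal sketch on synthetic data; the function names and exact formulations are illustrative assumptions, not the metrics defined in the paper.

import numpy as np

def bias_amplification(train_pairs, pred_pairs, attribute):
    """Illustrative bias-amplification score: for each task label, compare how
    often a protected attribute co-occurs with that label in the training data
    versus in model predictions, then average the per-label differences."""
    labels = sorted({lb for lb, _ in train_pairs} | {lb for lb, _ in pred_pairs})
    diffs = []
    for lb in labels:
        train_attrs = [a for label, a in train_pairs if label == lb]
        pred_attrs = [a for label, a in pred_pairs if label == lb]
        if not train_attrs or not pred_attrs:
            continue
        p_train = sum(a == attribute for a in train_attrs) / len(train_attrs)
        p_pred = sum(a == attribute for a in pred_attrs) / len(pred_attrs)
        diffs.append(p_pred - p_train)  # > 0: the model amplifies the association
    return float(np.mean(diffs)) if diffs else 0.0

def group_disparity(correct, groups):
    """Accuracy gap between demographic groups, a simple group-fairness measure."""
    correct, groups = np.asarray(correct, dtype=bool), np.asarray(groups)
    per_group = {g: float(correct[groups == g].mean()) for g in np.unique(groups)}
    return max(per_group.values()) - min(per_group.values()), per_group

# Synthetic (label, gender-attribute) pairs: training co-occurrences vs. predictions.
train = [("cooking", "woman")] * 70 + [("cooking", "man")] * 30
preds = [("cooking", "woman")] * 85 + [("cooking", "man")] * 15
print("bias amplification ('woman'):", bias_amplification(train, preds, "woman"))

# Synthetic VQA-style outcomes: per-example correctness and gender-group membership.
gap, per_group = group_disparity([1, 1, 0, 1, 0, 1, 1, 0],
                                 ["m", "m", "m", "m", "f", "f", "f", "f"])
print("per-group accuracy:", per_group, "accuracy gap:", gap)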
Pages: 8465-8483
Number of pages: 19