Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models

Cited by: 0
Authors:
Cabello, Laura [1 ]
Bugliarello, Emanuele [1 ]
Brandl, Stephanie [1 ]
Elliott, Desmond [1 ]
Institution:
[1] Univ Copenhagen, Dept Comp Sci, Copenhagen, Denmark
DOI: (not available)
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Pretrained machine learning models are known to perpetuate and even amplify existing biases in data, which can result in unfair outcomes that ultimately impact user experience. Therefore, it is crucial to understand the mechanisms behind those prejudicial biases to ensure that model performance does not result in discriminatory behaviour toward certain groups or populations. In this work, we define gender bias as our case study. We quantify bias amplification in pretraining and after fine-tuning on three families of vision-and-language models. We investigate the connection, if any, between the two learning stages, and evaluate how bias amplification reflects on model performance. Overall, we find that bias amplification in pretraining and after fine-tuning are independent. We then examine the effect of continued pretraining on gender-neutral data, finding that this reduces group disparities, i.e., promotes fairness, on VQAv2 and retrieval tasks without significantly compromising task performance.
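The abstract does not reproduce the paper's exact metrics, but the kind of bias amplification it quantifies can be illustrated with a minimal sketch in the spirit of co-occurrence-based measures: compare how strongly a task label co-occurs with a gender group in model predictions versus in the training data. All function names and the toy data below are hypothetical, for illustration only.

```python
# Hypothetical sketch of a co-occurrence-based bias-amplification measure:
# a positive value means the model's predictions skew toward a gender group
# more strongly than the training data already did.

def cooccurrence_ratio(pairs, label, group):
    """Fraction of instances carrying `label` that also carry `group`."""
    with_label = [g for (l, g) in pairs if l == label]
    if not with_label:
        return 0.0
    return sum(1 for g in with_label if g == group) / len(with_label)

def bias_amplification(train_pairs, pred_pairs, label, group="female"):
    """Skew in predictions minus skew in training data for one label."""
    return (cooccurrence_ratio(pred_pairs, label, group)
            - cooccurrence_ratio(train_pairs, label, group))

# Toy example: "cooking" co-occurs with "female" in 2/3 of training
# instances but in 3/4 of model predictions, so the skew is amplified.
train = [("cooking", "female")] * 2 + [("cooking", "male")] * 1
preds = [("cooking", "female")] * 3 + [("cooking", "male")] * 1
print(round(bias_amplification(train, preds, "cooking"), 3))  # → 0.083
```

Averaging such per-label gaps across groups gives one way to talk about the "group disparities" the abstract reports continued pretraining reduces; the paper itself should be consulted for its actual definitions.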
Pages: 8465 - 8483 (19 pages)