Fairea: A Model Behaviour Mutation Approach to Benchmarking Bias Mitigation Methods

Cited by: 41
Authors
Hort, Max [1 ]
Zhang, Jie M. [1 ]
Sarro, Federica [1 ]
Harman, Mark [1 ]
Affiliation
[1] UCL, London, England
Keywords
Software fairness; bias mitigation; model mutation
DOI
10.1145/3468264.3468565
Chinese Library Classification (CLC)
TP31 [Computer software]
Subject classification codes
081202; 0835
Abstract
The increasingly wide uptake of Machine Learning (ML) has raised the significance of the problem of tackling bias (i.e., unfairness), making it a primary software engineering concern. In this paper, we introduce Fairea, a model behaviour mutation approach to benchmarking ML bias mitigation methods. We also report on a large-scale empirical study to test the effectiveness of 12 widely-studied bias mitigation methods. Our results reveal that, surprisingly, bias mitigation methods show poor effectiveness in 49% of the cases. In particular, 15% of the mitigation cases have worse fairness-accuracy trade-offs than the baseline established by Fairea; 34% of the cases show both a decrease in accuracy and an increase in bias. Fairea has been made publicly available for software engineers and researchers to evaluate their bias mitigation methods.
Pages: 994 - 1006 (13 pages)
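The record contains no code, but the "model behaviour mutation" idea the abstract describes can be sketched roughly as follows: copies of a trained model's predictions are mutated by overwriting a growing fraction of them with a fixed label, and the resulting (accuracy, bias) points form a baseline trade-off curve against which bias mitigation methods can be judged. The sketch below is illustrative only, not the authors' released tool; the synthetic data, the `mutation_baseline` helper, and the choice of statistical parity difference as the bias metric are all assumptions made for this example.

```python
# Minimal sketch of a prediction-mutation baseline in the spirit of Fairea
# (not the authors' implementation). Assumes binary labels and a binary
# protected attribute; all names and the synthetic data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: two features plus a binary protected attribute.
n = 4000
protected = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, 2)) + protected[:, None] * 0.5
y = (x[:, 0] + 0.8 * protected + rng.normal(scale=1.0, size=n) > 0.5).astype(int)

X = np.column_stack([x, protected])
X_tr, X_te, y_tr, y_te, a_tr, a_te = train_test_split(
    X, y, protected, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def statistical_parity_difference(y_pred, group):
    # |P(yhat=1 | group=0) - P(yhat=1 | group=1)|; lower means less bias.
    return float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

def mutation_baseline(y_pred, group, y_true, degrees, mutant_label=None, repeats=10):
    """Overwrite a fraction (degree) of the predictions with a fixed label and
    record the resulting (accuracy, bias) points; these trace the baseline
    fairness-accuracy trade-off that a mitigation method should beat."""
    if mutant_label is None:
        mutant_label = int(np.bincount(y_pred).argmax())  # majority prediction
    points = []
    for d in degrees:
        accs, biases = [], []
        for _ in range(repeats):
            mutated = y_pred.copy()
            idx = rng.choice(len(mutated), size=int(d * len(mutated)), replace=False)
            mutated[idx] = mutant_label
            accs.append(accuracy(y_te, mutated))
            biases.append(statistical_parity_difference(mutated, group))
        points.append((d, float(np.mean(accs)), float(np.mean(biases))))
    return points

baseline = mutation_baseline(pred, a_te, y_te, degrees=np.linspace(0.0, 1.0, 11))
for degree, acc, bias in baseline:
    print(f"degree={degree:.1f}  accuracy={acc:.3f}  bias={bias:.3f}")
```

Under these assumptions, the mutation degrees trace a baseline of fairness-accuracy trade-offs obtained by trivially degrading the model. In the abstract's terms, a mitigation method whose (accuracy, bias) point falls on the worse side of this baseline corresponds to the 15% of cases with a worse trade-off, and a point with both lower accuracy and higher bias than the original model corresponds to the 34% of cases reported above.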