How to be fair? A study of label and selection bias

Cited by: 2
Authors:
Favier, Marco [1]
Calders, Toon [1]
Pinxteren, Sam [1]
Meyer, Jonathan [1]
Affiliations:
[1] Univ Antwerp, Antwerp, Belgium
Keywords:
Algorithmic fairness; Ethical AI; Classification; Fairness-accuracy trade-off
DOI:
10.1007/s10994-023-06401-1
Chinese Library Classification:
TP18 [Artificial intelligence theory]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
It is widely accepted that biased data leads to biased, and thus potentially unfair, models. Accordingly, several measures of bias in data and in model predictions have been proposed, along with bias mitigation techniques that aim to learn models that are fair by design. Despite the many mitigation techniques developed over the past decade, however, it is still poorly understood which methods work under which circumstances. Recently, Wick et al. showed, in experiments on synthetic data, that there are situations in which bias mitigation techniques lead to more accurate models when evaluated on unbiased data. Nevertheless, in the absence of a thorough mathematical analysis, it remains unclear which techniques are effective under which circumstances. We address this problem by establishing relationships between the type of bias and the effectiveness of a mitigation technique, where we categorize the mitigation techniques by the bias measure they optimize. In this paper we illustrate this principle for label and selection bias on the one hand, and for demographic parity and "We're All Equal" on the other. Our theoretical analysis explains the results of Wick et al., and we also show that there are situations in which minimizing fairness measures does not result in the fairest possible distribution.
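The abstract refers to demographic parity as one of the fairness measures studied, and to label bias as one way training data can be corrupted. The Python sketch below is a minimal illustration, not code from the paper: it computes the demographic-parity difference and simulates label bias by flipping a fraction of positive labels in one group. All function names, the flip rate, and the binary-attribute setup are assumptions made for illustration.

import numpy as np

def demographic_parity_difference(y, group):
    """Absolute difference in positive rates between two groups:
    |P(Y = 1 | group = 0) - P(Y = 1 | group = 1)|.
    Usually applied to model predictions; applied here to labels
    to quantify bias in the data itself."""
    y, group = np.asarray(y), np.asarray(group)
    return abs(y[group == 0].mean() - y[group == 1].mean())

def apply_label_bias(y, group, flip_rate=0.3, disadvantaged=1, seed=0):
    """Simulate label bias (one illustrative bias model): flip a fraction
    of positive labels to negative within the disadvantaged group."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y).copy()
    positives = np.flatnonzero((np.asarray(group) == disadvantaged) & (y == 1))
    flips = rng.choice(positives, size=int(flip_rate * len(positives)), replace=False)
    y[flips] = 0
    return y

# Toy usage: labels that satisfy demographic parity, then the same
# labels after simulated label bias against group 1.
rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=10_000)      # binary sensitive attribute
y_true = rng.binomial(1, 0.5, size=10_000)   # equal base rates across groups
y_biased = apply_label_bias(y_true, group)

print("DP difference, unbiased labels:", demographic_parity_difference(y_true, group))
print("DP difference, biased labels:  ", demographic_parity_difference(y_biased, group))

On such synthetically biased labels, a model fit to the biased data can appear unfair under demographic parity even though the underlying unbiased distribution is fair, which is the kind of situation the paper's analysis addresses.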
Pages: 5081-5104
Page count: 24
Related Papers:
(showing 10 of 50 records)
  • [1] Favier, Marco; Calders, Toon; Pinxteren, Sam; Meyer, Jonathan. How to be fair? A study of label and selection bias. Machine Learning, 2023, 112: 5081-5104.
  • [2] Du, Wei; Wu, Xintao. Fair and Robust Classification Under Sample Selection Bias. Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM 2021), 2021: 2999-3003.
  • [3] Söderberg, H; Andersson, C; Janzon, L; Sjöberg, NO. Selection bias in a study on how women experienced induced abortion. European Journal of Obstetrics Gynecology and Reproductive Biology, 1998, 77(1): 67-70.
  • [4] Canalli, Ygor; Braida, Filipe; Alvim, Leandro; Zimbrao, Geraldo. Fair Transition Loss: From label noise robustness to bias mitigation. Knowledge-Based Systems, 2024, 294.
  • [5] Sparre, FD. How to label products under Fair Packaging and Labeling Act. Food Drug Cosmetic Law Journal, 1969, 24(6): 278-285.
  • [6] Segal, Uzi. Fair bias. Economics and Philosophy, 2006, 22(2): 213-229.
  • [7] Fan, Shaohua; Wang, Xiao; Shi, Chuan; Kuang, Kun; Liu, Nian; Wang, Bai. Debiased Graph Neural Networks With Agnostic Label Selection Bias. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(4): 4411-4422.
  • [8] Wolfe, F. Selection bias in fibrositis study. Arthritis and Rheumatism, 1982, 25(11): 1390.
  • [9] Polymeropoulou, Vasiliki; Sorkos, Georgios. How fair is the selection of school principals in the Greek educational context? Educational Management Administration & Leadership, 2024, 52(2): 342-358.
  • [10] Nohr, Ellen A.; Liew, Zeyan. How to investigate and adjust for selection bias in cohort studies. Acta Obstetricia et Gynecologica Scandinavica, 2018, 97(4): 407-416.