Ensuring Fairness under Prior Probability Shifts

Cited by: 20
Authors
Biswas, Arpita [1 ]
Mukherjee, Suvam [2 ]
Affiliations
[1] Harvard Univ, Cambridge, MA 02138 USA
[2] Microsoft Corp, Redmond, WA 98052 USA
Keywords
Classification; Discrimination; Distributional Shifts; Algorithmic Fairness
DOI
10.1145/3461702.3462596
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Prior probability shift is a phenomenon where the class distributions of the training and test datasets differ within population subgroups. Such shifts can be observed in the yearly records of several real-world datasets, for example, recidivism records and medical expenditure surveys. If unaccounted for, they can cause the predictions of a classifier to become unfair towards specific population subgroups. While the fairness notion called Proportional Equality (PE) accounts for such shifts, a procedure to ensure PE-fairness was previously unknown. In this work, we design an algorithm, called CAPE, that ensures fair classification under such shifts. We introduce a metric, called prevalence difference (PD), which CAPE attempts to minimize in order to achieve fairness under prior probability shifts. We theoretically establish that this metric exhibits several properties that are desirable for a fair classifier. We evaluate the efficacy of CAPE through thorough experiments on synthetic datasets, and compare its performance with several state-of-the-art fair classifiers on real-world datasets such as COMPAS (criminal risk assessment) and MEPS (medical expenditure panel survey). The results indicate that CAPE ensures a high degree of PE-fairness in its predictions while performing well on other important metrics.
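The abstract does not state PD's formal definition, so the following is a minimal sketch, assuming PD for a subgroup is the absolute gap between that subgroup's true positive-class prevalence and the prevalence of the classifier's positive predictions; the function name prevalence_difference and the toy data are illustrative, not from the paper.

```python
import numpy as np

def prevalence_difference(y_true, y_pred, groups):
    """Per-subgroup gap between true and predicted positive prevalence.

    Assumes PD(g) = |P(y_hat = 1 | g) - P(y = 1 | g)|; the abstract
    does not give the formal definition, so this is illustrative.
    """
    gaps = {}
    for g in np.unique(groups):
        mask = groups == g
        true_prev = y_true[mask].mean()   # actual base rate in subgroup g
        pred_prev = y_pred[mask].mean()   # base rate implied by predictions
        gaps[g] = abs(pred_prev - true_prev)
    return gaps

# Toy prior probability shift: subgroup "b"'s positive rate falls from
# 0.5 at training time to 0.2 at test time, while a classifier tuned to
# the stale training prior keeps predicting positives at roughly 0.5.
rng = np.random.default_rng(0)
groups = np.array(["a"] * 500 + ["b"] * 500)
y_true = np.concatenate([rng.binomial(1, 0.4, 500),   # subgroup a: stable prior
                         rng.binomial(1, 0.2, 500)])  # subgroup b: shifted prior
y_pred = np.concatenate([rng.binomial(1, 0.4, 500),
                         rng.binomial(1, 0.5, 500)])  # stale training prior
print(prevalence_difference(y_true, y_pred, groups))
```

On this toy data, subgroup "b" shows a large PD while subgroup "a" does not, which is the kind of per-subgroup gap that, per the abstract, CAPE attempts to minimize.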
Pages: 414-424
Page count: 11
Related papers
50 records total
  • [1] Algorithmic Fairness Generalization under Covariate and Dependence Shifts Simultaneously
    Zhao, Chen
    Jiang, Kai
    Wu, Xintao
    Wang, Haoliang
    Khan, Latifur
    Grant, Christan
    Chen, Feng
    PROCEEDINGS OF THE 30TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2024, 2024, : 4419 - 4430
  • [2] Softmin discrete minimax classifier for imbalanced classes and prior probability shifts
    Gilet, Cyprien
    Guyomard, Marie
    Destercke, Sebastien
    Fillatre, Lionel
    MACHINE LEARNING, 2024, 113 (02) : 605 - 645
  • [3] Ensuring Fairness in Medical Education Assessment
    Boatright, Dowin
    Edje, Louito
    Gruppen, Larry D.
    Hauer, Karen E.
    Humphrey, Holly J.
    Marcotte, Kayla
    ACADEMIC MEDICINE, 2023, 98 (8S) : S1 - S2
  • [4] Ensuring Fairness in Health Care Coverage
    Wiatrowski, William
    MONTHLY LABOR REVIEW, 2008, 131 (01) : 56 - 57
  • [5] Ensuring Care, Respect, and Fairness for the Elderly
    Childress, J. F.
    HASTINGS CENTER REPORT, 1984, 14 (05) : 27 - 31
  • [6] Ensuring generalized fairness in batch classification
    Pal, Manjish
    Pokhriyal, Subham
    Sikdar, Sandipan
    Ganguly, Niloy
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [7] Transferring Fairness under Distribution Shifts via Fair Consistency Regularization
    An, Bang
    Che, Zora
    Ding, Mucong
    Huang, Furong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [8] Generative models improve fairness of medical classifiers under distribution shifts
    Ktena, Ira
    Wiles, Olivia
    Albuquerque, Isabela
    Rebuffi, Sylvestre-Alvise
    Tanno, Ryutaro
    Roy, Abhijit Guha
    Azizi, Shekoofeh
    Belgrave, Danielle
    Kohli, Pushmeet
    Cemgil, Taylan
    Karthikesalingam, Alan
    Gowal, Sven
    NATURE MEDICINE, 2024, 30 (04) : 1166 - 1173
  • [9] Trading probability for fairness
    Jurdzinski, M.
    Kupferman, O.
    Henzinger, T. A.
    COMPUTER SCIENCE LOGIC, PROCEEDINGS, 2002, 2471 : 292 - 305