Costs and Benefits of Fair Representation Learning

Cited: 21
Authors
McNamara, Daniel [1 ,2 ]
Ong, Cheng Soon [1 ,2 ]
Williamson, Robert C. [1 ,2 ]
Affiliations
[1] Australian Natl Univ, Canberra, ACT, Australia
[2] CSIRO Data61, Canberra, ACT, Australia
Keywords
fairness; representation learning; machine learning;
DOI
10.1145/3306618.3317964
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning algorithms are increasingly used to make or support important decisions about people's lives. This has led to interest in the problem of fair classification, which involves learning to make decisions that are non-discriminatory with respect to a sensitive variable such as race or gender. Several methods have been proposed to solve this problem, including fair representation learning, which cleans the input data used by the algorithm to remove information about the sensitive variable. We show that using fair representation learning as an intermediate step in fair classification incurs a cost compared to directly solving the problem, which we refer to as the cost of mistrust. We show that fair representation learning in fact addresses a different problem, which is of interest when the data user is not trusted to access the sensitive variable. We quantify the benefits of fair representation learning by showing that any subsequent use of the cleaned data will not be too unfair. The benefits we identify result from restricting the decisions of adversarial data users, while the costs are due to applying those same restrictions to other data users.
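To make the "cleaning" idea in the abstract concrete, the following is a minimal sketch (not taken from the paper) of one simple form of it: removing the linearly predictable component of each feature with respect to the sensitive variable, so that any downstream model fit on the cleaned representation cannot exploit that linear information. The data, variable names, and residualization approach here are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
s = rng.integers(0, 2, n)                       # sensitive variable (e.g. group membership)
# Feature 0 leaks s; feature 1 is independent noise.
x = np.column_stack([s + rng.normal(0, 1, n), rng.normal(0, 1, n)])

# "Clean" the representation: regress each feature on [1, s] and keep
# only the residual, which is orthogonal to s by construction.
S = np.column_stack([np.ones(n), s]).astype(float)
beta, *_ = np.linalg.lstsq(S, x, rcond=None)
z = x - S @ beta                                # cleaned representation

print(abs(np.corrcoef(x[:, 0], s)[0, 1]))      # raw feature: clearly correlated with s
print(abs(np.corrcoef(z[:, 0], s)[0, 1]))      # cleaned feature: correlation ~0
```

Any data user who only ever sees `z` is restricted in how unfairly they can decide with respect to `s` (the "benefit"), but an honest user also loses whatever legitimate signal was entangled with `s` (the "cost of mistrust") — the trade-off the paper quantifies.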
Pages: 263-270
Page count: 8