Learning fair models without sensitive attributes: A generative approach

Cited by: 0
Authors
Zhu, Huaisheng [1 ]
Dai, Enyan [1 ]
Liu, Hui [2 ]
Wang, Suhang [1 ]
Affiliations
[1] Penn State Univ, Coll Informat Sci & Technol, University Pk, PA 16802 USA
[2] Michigan State Univ, Dept Comp Sci & Engn, E Lansing, MI 48824 USA
Funding
U.S. National Science Foundation;
Keywords
Fairness; Generative model; NETWORKS;
DOI
10.1016/j.neucom.2023.126841
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Most existing fair classifiers rely on sensitive attributes to achieve fairness. However, in many scenarios, sensitive attributes cannot be obtained due to privacy and legal restrictions, which challenges many existing fair classifiers. Although sensitive attributes are unavailable, many applications contain features/information in various formats that are relevant to the sensitive attributes. For example, a person's purchase history can reflect his or her race, which can help in learning classifiers that are fair with respect to race. However, work on exploiting relevant features to learn fair models without sensitive attributes is rather limited. Therefore, in this paper, we study the novel problem of learning fair models without sensitive attributes by exploring relevant features. We propose a probabilistic generative framework that effectively estimates the sensitive attribute from training data with relevant features in various formats and utilizes the estimated sensitive attribute information to learn fair models. Experimental results on real-world datasets show the effectiveness of our framework in terms of both accuracy and fairness. Our source code is available at: https://github.com/huaishengzhu/FairWS.
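The abstract's pipeline can be illustrated with a simplified sketch. This is not the paper's method (which uses a probabilistic generative model; see the linked repository): here a crude stand-in, thresholding a proxy feature, plays the role of sensitive-attribute estimation, and a demographic-parity penalty computed on that estimate regularizes a plain logistic-regression classifier. All variable names and the synthetic data are hypothetical, chosen only to show the two-step structure of (1) estimating a pseudo sensitive attribute from relevant features and (2) learning a fair model from the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: s is the (unobserved) sensitive attribute, r is a
# relevant proxy feature correlated with s, x are ordinary features.
n = 2000
s = rng.integers(0, 2, size=n)                      # hidden sensitive attribute
r = s + 0.3 * rng.standard_normal(n)                # proxy feature leaks s
x = rng.standard_normal((n, 2)) + 0.8 * s[:, None]  # features shifted by group
y = (x[:, 0] + 0.5 * rng.standard_normal(n) > 0.4).astype(float)

# Step 1 (stand-in for generative inference): estimate a pseudo
# sensitive attribute by thresholding the relevant proxy feature.
s_hat = (r > np.median(r)).astype(float)

# Step 2: logistic regression trained with a demographic-parity penalty
# measured on the estimated attribute s_hat instead of the true s.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.hstack([x, np.ones((n, 1))])                 # add bias column
w = np.zeros(X.shape[1])
lam = 2.0                                           # fairness penalty weight
for _ in range(500):
    p = sigmoid(X @ w)
    grad_ce = X.T @ (p - y) / n                     # cross-entropy gradient
    g1, g0 = s_hat == 1, s_hat == 0
    gap = p[g1].mean() - p[g0].mean()               # demographic-parity gap
    # Gradient of the gap w.r.t. w (chain rule through the sigmoid):
    dgap = (X[g1] * (p[g1] * (1 - p[g1]))[:, None]).mean(axis=0) \
         - (X[g0] * (p[g0] * (1 - p[g0]))[:, None]).mean(axis=0)
    w -= 0.5 * (grad_ce + lam * 2 * gap * dgap)     # penalty = lam * gap^2

p = sigmoid(X @ w)
acc = ((p > 0.5) == y).mean()
dp_gap = abs(p[s_hat == 1].mean() - p[s_hat == 0].mean())
print(f"accuracy={acc:.2f}  DP gap on estimated attribute={dp_gap:.2f}")
```

The point of the sketch is that the fairness regularizer only ever sees `s_hat`; when the relevant features correlate strongly with the true attribute, constraining the model on the estimate also constrains it on the hidden attribute.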
Pages: 11
Related References
50 records
  • [41] Not So Fair: The Impact of Presumably Fair Machine Learning Models
    Jorgensen, Mackenzie
    Richert, Hannah
    Black, Elizabeth
    Criado, Natalia
    Such, Jose
    [J]. PROCEEDINGS OF THE 2023 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, AIES 2023, 2023, : 297 - 311
  • [42] Learning Fair Representations for Kernel Models
    Tan, Zilong
    Yeom, Samuel
    Fredrikson, Matt
    Talwalkar, Ameet
    [J]. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108
  • [43] GroupMixNorm Layer for Learning Fair Models
    Pandey, Anubha
    Rai, Aditi
    Singh, Maneet
    Bhatt, Deepak
    Bhowmik, Tanmoy
    [J]. ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2023, PT I, 2023, 13935 : 520 - 531
  • [44] Fairness without Sensitive Attributes via Knowledge Sharing
    Ni, Hongliang
    Han, Lei
    Chen, Tong
    Sadiq, Shazia
    Demartini, Gianluca
    [J]. PROCEEDINGS OF THE 2024 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, ACM FACCT 2024, 2024, : 1897 - 1906
  • [45] On Learning of Choice Models with Interactive Attributes
    Aggarwal, Manish
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2016, 28 (10) : 2697 - 2708
  • [46] Counterfactual image generation by disentangling data attributes with deep generative models
    Lim, Jieon
    Joo, Weonyoung
    [J]. COMMUNICATIONS FOR STATISTICAL APPLICATIONS AND METHODS, 2023, 30 (06) : 589 - 603
  • [47] Towards Fair Cross-Domain Adaptation via Generative Learning
    Wang, Tongxin
    Ding, Zhengming
    Shao, Wei
    Tang, Haixu
    Huang, Kun
    [J]. 2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021), 2021, : 454 - 463
  • [48] Risk-Sensitive Generative Adversarial Imitation Learning
    Lacotte, Jonathan
    Ghavamzadeh, Mohammad
    Chow, Yinlam
    Pavone, Marco
    [J]. 22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89
  • [49] Identifiability of deep generative models without auxiliary information
    Kivva, Bohdan
    Rajendran, Goutham
    Ravikumar, Pradeep
    Aragam, Bryon
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022
  • [50] From Generative Models to Generative Passages: A Computational Approach to (Neuro) Phenomenology
    Ramstead, Maxwell J. D.
    Seth, Anil K.
    Hesp, Casper
    Sandved-Smith, Lars
    Mago, Jonas
    Lifshitz, Michael
    Pagnoni, Giuseppe
    Smith, Ryan
    Dumas, Guillaume
    Lutz, Antoine
    Friston, Karl
    Constant, Axel
    [J]. REVIEW OF PHILOSOPHY AND PSYCHOLOGY, 2022, 13 (04) : 829 - 857