Algorithmic Fairness Generalization under Covariate and Dependence Shifts Simultaneously

Cited by: 0
Authors
Zhao, Chen [1 ]
Jiang, Kai [2 ]
Wu, Xintao [3 ]
Wang, Haoliang [2 ]
Khan, Latifur [2 ]
Grant, Christan [4 ]
Chen, Feng [2 ]
Affiliations
[1] Baylor Univ, Waco, TX 76798 USA
[2] Univ Texas Dallas, Richardson, TX 75083 USA
[3] Univ Arkansas, Fayetteville, AR 72701 USA
[4] Univ Florida, Gainesville, FL USA
Source
PROCEEDINGS OF THE 30TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2024 | 2024
Funding
U.S. National Science Foundation
Keywords
Fairness; Generalization; Distribution Shifts;
DOI
10.1145/3637528.3671909
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Preserving the generalization of a fair and invariant classifier across domains, especially in the presence of distribution shifts, is a significant and intricate challenge in machine learning. In response, numerous effective algorithms have been developed for fairness-aware domain generalization. These algorithms are designed to navigate various types of distribution shifts, with particular emphasis on covariate and dependence shifts: covariate shift refers to changes in the marginal distribution of input features, while dependence shift involves changes in the joint distribution of the label variable and sensitive attributes. In this paper, we introduce a simple but effective approach that learns a fair and invariant classifier by simultaneously addressing both covariate and dependence shifts across domains. We posit the existence of an underlying transformation model that can transform data from one domain to another while preserving the semantics related to non-sensitive attributes and classes. By augmenting various synthetic data domains through this model, we learn a fair and invariant classifier on the source domains. This classifier can then be generalized to unknown target domains, maintaining both predictive performance and fairness. Extensive empirical studies on four benchmark datasets demonstrate that our approach surpasses state-of-the-art methods. The code repository is available at https://github.com/jk-kaijiang/FDDG.
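The abstract distinguishes two shift types and a fairness goal. The sketch below is not the paper's FDDG method; it is a minimal illustration, assuming demographic parity as the fairness notion, of (a) measuring the fairness gap a generalized classifier should keep small on a target domain, and (b) simulating a dependence shift by resampling the sensitive attribute `s` (altering the joint distribution of label and sensitive attribute) while leaving the non-sensitive features `X` and labels `y` untouched. The function names and `flip_prob` parameter are illustrative choices, not from the paper.

```python
import numpy as np


def demographic_parity_gap(y_pred, s):
    """Absolute difference in positive-prediction rates between the
    two sensitive groups (s == 0 vs. s == 1); 0 means parity."""
    y_pred = np.asarray(y_pred)
    s = np.asarray(s)
    return abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())


def augment_dependence_shift(X, y, s, flip_prob, rng):
    """Synthesize a new domain exhibiting dependence shift: flip each
    binary sensitive attribute with probability `flip_prob`, changing
    P(y, s) while keeping features X and labels y fixed."""
    s = np.asarray(s)
    s_new = np.where(rng.random(len(s)) < flip_prob, 1 - s, s)
    return np.asarray(X).copy(), np.asarray(y).copy(), s_new
```

In a training loop one would augment the source domains with such synthetic variants and penalize the parity gap alongside the classification loss, so that the learned classifier stays fair under both kinds of shift.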
Pages: 4419–4430
Page count: 12