Balancing Learning Model Privacy, Fairness, and Accuracy With Early Stopping Criteria

Times Cited: 12
Authors
Zhang, Tao [1 ]
Zhu, Tianqing [1 ]
Gao, Kun [1 ]
Zhou, Wanlei [2 ]
Yu, Philip S. [3 ]
Affiliations
[1] Univ Technol Sydney, Sch Comp Sci, Ctr Cyber Secur & Privacy, Sydney, NSW 2007, Australia
[2] City Univ Macau, Inst Data Sci, Macau, Peoples R China
[3] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
Funding
Australian Research Council;
Keywords
Training; Privacy; Deep learning; Costs; Analytical models; Stability criteria; Stochastic processes; differential privacy (DP); early stopping criteria; machine learning fairness; stochastic gradient descent;
DOI
10.1109/TNNLS.2021.3129592
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As deep learning models mature, one of the most pressing questions we face is: what is the ideal tradeoff between accuracy, fairness, and privacy (AFP)? Unfortunately, both the privacy and the fairness of a model come at the cost of its accuracy. Hence, an efficient and effective means of fine-tuning the balance among these three needs is critical. Motivated by some curious observations in privacy-accuracy tradeoffs with differentially private stochastic gradient descent (DP-SGD), where fair models sometimes result, we conjecture that fairness might be better managed as an indirect byproduct of this process. Hence, we conduct a series of analyses, both theoretical and empirical, on the impacts of implementing DP-SGD in deep neural network models through gradient clipping and noise addition. The results show that, in deep learning, the number of training epochs is central to striking a balance among AFP, because DP-SGD makes training less stable, opening the possibility of model updates with a low discrimination level and little loss in accuracy. Based on this observation, we design two early stopping criteria that help analysts choose the optimal epoch at which to stop training a model so as to achieve their ideal tradeoff. Extensive experiments show that our methods can achieve an ideal balance among AFP.
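The DP-SGD mechanism the abstract refers to (clip each per-example gradient, average, then add calibrated Gaussian noise) can be sketched as below. This is a minimal NumPy illustration of the general technique, not the paper's implementation; all function and parameter names are illustrative.

```python
import numpy as np

def dp_sgd_step(per_example_grads, lr, clip_norm, noise_multiplier, rng):
    """One DP-SGD update (illustrative sketch).

    Each example's gradient is clipped to L2 norm `clip_norm`, the clipped
    gradients are averaged, and Gaussian noise with standard deviation
    noise_multiplier * clip_norm / batch_size is added to the mean.
    Returns the parameter update (negative scaled noisy gradient).
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only if the gradient exceeds the clipping threshold.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=mean_grad.shape)
    return -lr * (mean_grad + noise)

# Example: with noise_multiplier=0 the step reduces to clipped-gradient SGD.
rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
update = dp_sgd_step(grads, lr=0.1, clip_norm=1.0,
                     noise_multiplier=0.0, rng=rng)
# [3, 4] has norm 5 and is clipped to [0.6, 0.8]; [0.3, 0.4] is unchanged,
# so the mean is [0.45, 0.6] and the update is [-0.045, -0.06].
```

Both knobs the abstract mentions appear here: `clip_norm` bounds any single example's influence, and `noise_multiplier` sets the privacy noise scale. The instability these introduce across epochs is what the paper's early stopping criteria exploit.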
Pages: 5557-5569
Page Count: 13