Communication-efficient and Utility-Aware Adaptive Gaussian Differential Privacy for Personalized Federated Learning

Cited by: 0
Authors
Li M. [1 ]
Xiao D. [1 ]
Chen L.-J. [1 ]
Affiliations
[1] College of Computer Science, Chongqing University, Chongqing
Keywords
adaptive Gaussian differential privacy; dynamic hierarchical compression; highly efficient communication; personalized federated learning; privacy-utility trade-off; private computing
DOI
10.11897/SP.J.1016.2024.00924
Abstract
In recent years, privacy protection in federated learning (FL) has drawn increasing attention, mainly because the parameters (or gradients) communicated between the central server and the participants during collaborative learning pose a significant risk of privacy leakage. In other words, the communication process in an FL system can expose sensitive data belonging to local participants, which has raised heightened concerns among researchers and practitioners. Beyond privacy protection, several other unavoidable factors must be considered simultaneously: frequent gradient exchange, heterogeneous data distributions among local participants, and the limited resources of local hardware. These factors further complicate privacy protection in FL. To address four critical issues, namely data privacy, model utility, communication efficiency, and non-independently and identically distributed (non-IID) data among local participants, in a unified manner, this paper proposes a novel Communication-efficient and Utility-aware Adaptive Gaussian Differential Privacy method for Personalized FL, called CUAG-PFL. Specifically, a dynamic layer-compression scheme for model gradients in the FL system is proposed. The scheme improves communication efficiency while reducing the utility loss caused by compression and reconstruction: it dynamically customizes the compression rate for each layer of the communicated gradients and then constructs the corresponding deterministic binary measurement matrix from that rate.
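The per-layer compression step described above can be sketched as follows. The abstract does not give the paper's exact construction of the deterministic binary measurement matrix, so the seeded Bernoulli {0, 1} pattern below is a stand-in assumption, as are the layer names and the fixed per-layer rates; only the overall shape (flatten a layer's gradient, pick a rate, project with a binary matrix of matching size) follows the description.

```python
import numpy as np

def binary_measurement_matrix(m, n, seed=0):
    # Deterministic {0,1} matrix reproducible from a fixed seed, so the
    # server can rebuild the same matrix for reconstruction. The paper's
    # actual construction is not given in the abstract; this Bernoulli
    # pattern is an illustrative assumption.
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=(m, n)).astype(np.float64)

def compress_layer(grad, rate, seed=0):
    # Flatten the layer gradient and keep only about rate * n measurements.
    g = grad.ravel()
    m = max(1, int(np.ceil(rate * g.size)))
    phi = binary_measurement_matrix(m, g.size, seed)
    return phi @ g  # compressed measurements sent upstream

# Per-layer rates chosen dynamically by the scheme; fixed here for brevity.
grads = {"conv1": np.ones((4, 4)), "fc": np.ones(10)}
rates = {"conv1": 0.25, "fc": 0.5}
compressed = {name: compress_layer(g, rates[name], seed=i)
              for i, (name, g) in enumerate(grads.items())}
```

With these rates, the 16-element `conv1` gradient shrinks to 4 measurements and the 10-element `fc` gradient to 5, illustrating how a lower rate for a redundant layer cuts upstream traffic.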
The designed deterministic binary measurement matrix effectively removes redundant information from the model gradients that are uploaded to the central server. Subsequently, an adaptive Gaussian differential privacy operation is applied to the compressed model gradients of the local participants. This operation jointly optimizes the main privacy-related parameters, namely the clipping threshold, the sensitivity, and the noise scale, so that the privacy of local data is preserved while each local participant's model retains satisfactory performance. In addition, a rigorous privacy analysis of the proposed CUAG-PFL is presented. To validate the superiority of CUAG-PFL in the four critical aspects of data privacy, model utility, communication efficiency, and personalization, extensive simulations, comparisons, and analyses are conducted on two classic real-world federated datasets, CIFAR-10 and CIFAR-100. All experimental results and analyses show that CUAG-PFL simultaneously improves the privacy of local sensitive data, the communication efficiency, and the model utility, while also addressing the problem of non-IID data among local participants in the FL system. In particular, even when the privacy budget is only 0.92 and the upstream communication volume is reduced by 68.6%, the model performance loss caused by both privacy protection and gradient compression is just 1.66% for the proposed CUAG-PFL. © 2024 Science Press. All rights reserved.
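The Gaussian differential privacy step above can be sketched with the standard Gaussian mechanism: clip the (compressed) gradient to bound its L2 sensitivity at the clipping threshold, then add noise whose standard deviation is the noise multiplier times that threshold. The adaptive schedule that jointly tunes the threshold, sensitivity, and noise scale is the paper's contribution and is not reproduced here; the fixed parameter values below are illustrative assumptions.

```python
import numpy as np

def gaussian_dp_perturb(grad, clip_threshold, noise_multiplier, rng=None):
    # Standard Gaussian mechanism: clipping bounds the L2 sensitivity at
    # clip_threshold, then Gaussian noise with std = noise_multiplier *
    # clip_threshold is added. In CUAG-PFL these parameters are adapted
    # over training rounds; here they are fixed for illustration.
    rng = rng if rng is not None else np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_threshold / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_threshold, size=grad.shape)
    return clipped + noise

# A gradient with L2 norm 5 is scaled down to norm 1 before noising.
g = np.array([3.0, 4.0])
private_g = gaussian_dp_perturb(g, clip_threshold=1.0, noise_multiplier=0.5)
```

Clipping before noising is what makes the calibration valid: without a bound on the gradient norm, no finite noise scale yields a differential privacy guarantee.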
Pages: 924-946
Page count: 22