A Novel Approach for Differential Privacy-Preserving Federated Learning

Cited by: 0
Authors
Elgabli, Anis [1 ,2 ]
Mesbah, Wessam [2 ,3 ]
Affiliations
[1] King Fahd University of Petroleum and Minerals, Industrial and Systems Engineering Department, Dhahran,31261, Saudi Arabia
[2] King Fahd University of Petroleum and Minerals, Center for Communication Systems and Sensing, Dhahran,31261, Saudi Arabia
[3] King Fahd University of Petroleum and Minerals, Electrical Engineering Department, Dhahran,31261, Saudi Arabia
Keywords
Adversarial machine learning; Contrastive learning; Differential privacy; Privacy-preserving techniques; Stochastic models; Stochastic systems
DOI
10.1109/OJCOMS.2024.3521651
Abstract
In this paper, we start with a comprehensive evaluation of the effect of adding differential privacy (DP) to federated learning (FL) approaches, focusing on methodologies employing global (stochastic) gradient descent (SGD/GD) and local SGD/GD techniques. These global and local techniques are commonly referred to as FedSGD/FedGD and FedAvg, respectively. Our analysis reveals that, as long as each client performs only one local iteration before transmitting to the parameter server (PS), as in FedGD, both FedGD and FedAvg achieve the same accuracy/loss under the same privacy guarantees, despite requiring different perturbation noise powers. Furthermore, we propose a novel DP mechanism, which is shown to ensure privacy without compromising performance. In particular, we propose sharing a random seed (or a specified sequence of random seeds) among the collaborating clients, where each client uses this seed to introduce perturbations to its updates prior to transmission to the PS. Importantly, thanks to the shared random seed, the clients are able to cancel the noise effects and recover the original global model. This mechanism preserves privacy against both a curious PS and external eavesdroppers without compromising the performance of the final model at each client, thus mitigating the risk of inversion attacks aimed at retrieving (partially or fully) the clients' data. We also discuss the importance and effect of clipping, used in practical implementations of DP mechanisms to upper bound the perturbation noise. Moreover, owing to the ability to cancel the noise at individual clients, our proposed approach allows arbitrarily high perturbation levels, and hence clipping can be avoided altogether, yielding the same performance as noise-free standard FL approaches. © 2020 IEEE.
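The abstract describes the seed-sharing mechanism only at a high level. The following is a minimal sketch, not the authors' implementation, of how such a scheme could operate: each client derives its perturbation noise from a seed shared among the clients (but unknown to the PS and eavesdroppers), so any seed holder can later replay all noise terms and subtract their average from the aggregated model. The Gaussian noise model, the per-client/per-round seed derivation, and names such as shared_seed, noise_std, and client_noise are illustrative assumptions rather than details taken from the paper.

# A minimal sketch (assumptions noted above), NOT the paper's implementation.
import numpy as np

def client_noise(shared_seed, client_id, round_idx, shape, noise_std):
    # Deterministic noise that any seed holder can reproduce for a given client and round.
    rng = np.random.default_rng([shared_seed, client_id, round_idx])
    return rng.normal(0.0, noise_std, size=shape)

def perturb_update(update, shared_seed, client_id, round_idx, noise_std):
    # Client side: mask the local update before sending it to the parameter server.
    return update + client_noise(shared_seed, client_id, round_idx, update.shape, noise_std)

def ps_aggregate(noisy_updates):
    # Server side: ordinary FedAvg-style averaging of the (noisy) updates it receives.
    return np.mean(noisy_updates, axis=0)

def denoise_global(noisy_avg, shared_seed, client_ids, round_idx, noise_std):
    # Client side: replay every client's noise from the shared seed and subtract its mean.
    total = sum(client_noise(shared_seed, cid, round_idx, noisy_avg.shape, noise_std)
                for cid in client_ids)
    return noisy_avg - total / len(client_ids)

# Tiny end-to-end check: the recovered average matches the noise-free average.
updates = [np.ones(4) * i for i in range(3)]
noisy = [perturb_update(u, shared_seed=42, client_id=i, round_idx=0, noise_std=10.0)
         for i, u in enumerate(updates)]
avg = denoise_global(ps_aggregate(noisy), 42, [0, 1, 2], 0, 10.0)
assert np.allclose(avg, np.mean(updates, axis=0))

Because the denoising happens only after aggregation, the PS and any eavesdropper only ever observe noisy updates, while the clients recover the noise-free average exactly; this is consistent with the abstract's claim that arbitrarily large perturbations can be used without clipping or performance loss.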
Pages: 466-476
Related papers (50 in total)
  • [1] AddShare: A Privacy-Preserving Approach for Federated Learning
    Asare, Bernard Atiemo
    Branco, Paula
    Kiringa, Iluju
    Yeap, Tet
    COMPUTER SECURITY. ESORICS 2023 INTERNATIONAL WORKSHOPS, PT I, 2024, 14398 : 299 - 309
  • [2] A Syntactic Approach for Privacy-Preserving Federated Learning
    Choudhury, Olivia
    Gkoulalas-Divanis, Aris
    Salonidis, Theodoros
    Sylla, Issa
    Park, Yoonyoung
    Hsu, Grace
    Das, Amar
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 1762 - 1769
  • [3] Privacy-Preserving Robust Federated Learning with Distributed Differential Privacy
    Wang, Fayao
    He, Yuanyuan
    Guo, Yunchuan
    Li, Peizhi
    Wei, Xinyu
    2022 IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, 2022, : 598 - 605
  • [4] PPeFL: Privacy-Preserving Edge Federated Learning With Local Differential Privacy
    Wang, Baocang
    Chen, Yange
    Jiang, Hang
    Zhao, Zhen
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (17) : 15488 - 15500
  • [5] Privacy-Preserving Personalized Federated Learning
    Hu, Rui
    Guo, Yuanxiong
    Li, Hongning
    Pei, Qingqi
    Gong, Yanmin
    ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020,
  • [6] Privacy Preserving Federated Learning: A Novel Approach for Combining Differential Privacy and Homomorphic Encryption
    Aziz, Rezak
    Banerjee, Soumya
    Bouzefrane, Samia
    INFORMATION SECURITY THEORY AND PRACTICE, WISTP 2024, 2024, 14625 : 162 - 177
  • [7] Frameworks for Privacy-Preserving Federated Learning
    Phong, Le Trieu
    Phuong, Tran Thi
    Wang, Lihua
    Ozawa, Seiichi
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2024, E107D (01) : 2 - 12
  • [8] Adaptive privacy-preserving federated learning
    Liu, Xiaoyuan
    Li, Hongwei
    Xu, Guowen
    Lu, Rongxing
    He, Miao
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2020, 13 (06) : 2356 - 2366
  • [9] Adaptive privacy-preserving federated learning
    Xiaoyuan Liu
    Hongwei Li
    Guowen Xu
    Rongxing Lu
    Miao He
    Peer-to-Peer Networking and Applications, 2020, 13 : 2356 - 2366
  • [10] Privacy-preserving Techniques in Federated Learning
    Liu Y.-X.
    Chen H.
    Liu Y.-H.
    Li C.-P.
    Ruan Jian Xue Bao/Journal of Software, 2022, 33 (03): 1057 - 1092