Sybil Attacks and Defense on Differential Privacy based Federated Learning

Cited: 0
Authors
Jiang, Yupeng [1]
Li, Yong [2]
Zhou, Yipeng [1]
Zheng, Xi [1]
Affiliations
[1] Macquarie Univ, Sydney, NSW, Australia
[2] Changchun Univ Technol, Changchun, Jilin, Peoples R China
Funding
Australian Research Council
Keywords
Federated learning; differential privacy; Sybil attack;
DOI
10.1109/TRUSTCOM53373.2021.00062
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In federated learning, machine learning and deep learning models are trained collaboratively across distributed devices while the training data stays local. The state-of-the-art privacy-preserving technique in this setting is user-level differential privacy. However, such a mechanism is vulnerable to specific model poisoning attacks such as Sybil attacks, in which a malicious adversary creates multiple fake clients or coordinates compromised devices to manipulate model updates directly. Recent defenses against model poisoning attacks struggle to detect Sybil attacks when differential privacy is applied, because the injected perturbation masks clients' model updates. In this work, we implement the first Sybil attacks on differential-privacy-based federated learning architectures and show their impact on model convergence. We randomly compromise a subset of clients and manipulate the noise level, controlled by the local privacy budget epsilon of the Laplace mechanism, that these Sybil clients apply to their local model updates. As a result, the convergence rate of the global model decreases, or training even diverges. We apply our attacks against two recent robust aggregation mechanisms, Krum and Trimmed Mean. Evaluation results on the MNIST and CIFAR-10 datasets show that our attacks effectively slow down the convergence of the global models. We then propose a defense that monitors the average training loss of all participants in each round for convergence anomaly detection, judging from the losses reported by randomly selected panels of clients. Our empirical study demonstrates that this defense effectively mitigates the impact of our Sybil attacks.
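As a rough illustration of the attack surface described in the abstract, the sketch below (our own construction in Python/NumPy, not the authors' code; the function names, toy update vector, and epsilon values are illustrative assumptions) shows how Sybil clients that perturb their updates with a deliberately tiny Laplace privacy budget epsilon can dominate a plain averaging step, since heavy DP noise is indistinguishable from a legitimately strict privacy choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_perturb(update, epsilon, sensitivity=1.0):
    """Laplace mechanism: add Lap(0, sensitivity/epsilon) noise to each
    coordinate of a local update. A smaller epsilon means larger noise."""
    return update + rng.laplace(0.0, sensitivity / epsilon, size=update.shape)

def federated_round(true_update, n_honest, n_sybil,
                    honest_eps=1.0, sybil_eps=0.01):
    """One FedAvg-style round. Honest clients perturb with a moderate
    budget; Sybil clients pick a tiny epsilon so their seemingly
    legitimate DP noise drowns out the aggregate signal."""
    updates = [laplace_perturb(true_update, honest_eps) for _ in range(n_honest)]
    updates += [laplace_perturb(true_update, sybil_eps) for _ in range(n_sybil)]
    return np.mean(updates, axis=0)  # plain averaging, no robust aggregation

true_update = np.full(10, 0.5)  # stand-in for a local gradient/update
clean = federated_round(true_update, n_honest=10, n_sybil=0)
attacked = federated_round(true_update, n_honest=8, n_sybil=2)
print("aggregation error, no Sybils:", np.linalg.norm(clean - true_update))
print("aggregation error, 2 Sybils: ", np.linalg.norm(attacked - true_update))
```

With sybil_eps two orders of magnitude below honest_eps, the Sybil noise scale is 100x larger, so even two Sybils out of ten can dominate the averaged update; per the abstract, the paper mounts this budget manipulation against Krum and Trimmed Mean as well.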
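The proposed defense monitors per-round training loss through randomly drawn judging panels. Below is a minimal sketch of that monitoring loop, again under our own assumptions (the panel size, anomaly window, and threshold are not specified in the abstract, so those values and all helper names are hypothetical):

```python
import random
from statistics import mean

rng = random.Random(0)

def panel_average_loss(client_losses, panel_size):
    """Poll a randomly drawn 'judging panel' of clients for their
    reported training loss and return the panel's average."""
    panel = rng.sample(sorted(client_losses), panel_size)
    return mean(client_losses[c] for c in panel)

def detect_anomaly(loss_history, window=3, tolerance=0.0):
    """Flag the current round if the panel-average loss has not improved
    over the last `window` rounds -- the convergence anomaly that a
    noise-injection Sybil attack produces."""
    if len(loss_history) <= window:
        return False
    return min(loss_history[-window:]) >= loss_history[-window - 1] - tolerance

# Toy run: the loss falls for four rounds, then stalls as if Sybil
# noise had kicked in; only the final, stalled round is flagged.
reported = [{f"client{i}": base for i in range(10)}
            for base in (2.3, 1.9, 1.6, 1.4, 1.41, 1.45, 1.44)]
history = []
for round_losses in reported:
    history.append(panel_average_loss(round_losses, panel_size=4))
    if detect_anomaly(history):
        print(f"round {len(history)}: convergence anomaly flagged")
```

Polling a random panel rather than every client keeps the check cheap and makes it harder for a fixed set of Sybils to control the loss reports the server sees in any given round.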
Pages: 355-362
Page count: 8
Related Papers
(50 in total)
  • [1] Samy, Ahmed E.; Girdzijauskas, Sarunas. Mitigating Sybil Attacks in Federated Learning. INFORMATION SECURITY PRACTICE AND EXPERIENCE, ISPEC 2023, 2023, 14341: 36-51.
  • [2] Miao, Lu; Yang, Wei; Hu, Rong; Li, Lu; Huang, Liusheng. Defending Against Backdoor Attacks in Federated Learning with Differential Privacy. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 2999-3003.
  • [3] Xiao, Xiong; Tang, Zhuo; Li, Chuanying; Xiao, Bin; Li, Kenli. SCA: Sybil-Based Collusion Attacks of IIoT Data Poisoning in Federated Learning. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19(3): 2608-2618.
  • [4] Lyu, Lingjuan; Yu, Han; Ma, Xingjun; Chen, Chen; Sun, Lichao; Zhao, Jun; Yang, Qiang; Yu, Philip S. Privacy and Robustness in Federated Learning: Attacks and Defenses. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35(7): 8726-8746.
  • [5] Xia, Geming; Chen, Jian; Huang, Xinyi; Yu, Chaodong; Zhang, Zhong. FL-PTD: A Privacy Preserving Defense Strategy Against Poisoning Attacks in Federated Learning. 2023 IEEE 47TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC, 2023: 735-740.
  • [6] Hu, Rui; Guo, Yuanxiong; Li, Hongning; Pei, Qingqi; Gong, Yanmin. Personalized Federated Learning With Differential Privacy. IEEE INTERNET OF THINGS JOURNAL, 2020, 7(10): 9530-9539.
  • [7] Arachchige, Pathum Chamikara Mahawaga; Liu, Dongxi; Camtepe, Seyit; Nepal, Surya; Grobler, Marthie; Bertok, Peter; Khalil, Ibrahim. Local Differential Privacy for Federated Learning. COMPUTER SECURITY - ESORICS 2022, PT I, 2022, 13554: 195-216.
  • [8] Triastcyn, Aleksei; Faltings, Boi. Federated Learning with Bayesian Differential Privacy. 2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2019: 2587-2596.
  • [9] Hossain, Md Tamjid; Islam, Shafkat; Badsha, Shahriar; Shen, Haoting. DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning. 2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021), 2021: 167-174.
  • [10] Weng, Shangyin; Zhang, Lei; Zhang, Xiaoshuai; Imran, Muhammad Ali. Faster Convergence on Differential Privacy-Based Federated Learning. IEEE INTERNET OF THINGS JOURNAL, 2024, 11(12): 22578-22589.