On the Vulnerability of Backdoor Defenses for Federated Learning

Cited: 0
Authors
Fang, Pei [1 ]
Chen, Jinghui [2 ]
Institutions
[1] Tongji Univ, Shanghai, Peoples R China
[2] Penn State Univ, University Pk, PA 16802 USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Federated Learning (FL) is a popular distributed machine learning paradigm that enables jointly training a global model without sharing clients' data. However, its repetitive server-client communication gives room for backdoor attacks, which aim to mislead the global model into a targeted misprediction when a specific trigger pattern is present. In response to such backdoor threats on federated learning, various defense measures have been proposed. In this paper, we study whether the current defense mechanisms truly neutralize the backdoor threats from federated learning in a practical setting by proposing a new federated backdoor attack method as a possible countermeasure. Different from traditional backdoor injection based on training (on triggered data) and rescaling (the malicious client model), the proposed backdoor attack framework (1) directly modifies (a small proportion of) local model weights to inject the backdoor trigger via sign flips; and (2) jointly optimizes the trigger pattern with the client model, making it more persistent and stealthy in circumventing existing defenses. In a case study, we examine the strengths and weaknesses of recent federated backdoor defenses from three major categories and provide suggestions to practitioners training federated models in practice.
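The sign-flip injection step (1) described above can be illustrated with a minimal sketch. This is not the paper's exact procedure: the choice of which coordinates to flip (here, the largest-magnitude weights) and the flip fraction are illustrative assumptions, and the trigger co-optimization of step (2) is omitted.

```python
import numpy as np

def sign_flip_inject(weights, flip_fraction=0.01, rng=None):
    """Flip the sign of a small fraction of local model weights.

    Sketches the idea of editing weights directly instead of
    retraining on triggered data. The largest-magnitude heuristic
    and the default fraction are assumptions for illustration.
    """
    flat = weights.ravel().copy()
    k = max(1, int(flip_fraction * flat.size))
    # Pick the k largest-magnitude coordinates (assumed heuristic).
    idx = np.argsort(np.abs(flat))[-k:]
    flat[idx] = -flat[idx]
    return flat.reshape(weights.shape)

# Example: flip 10% of a toy 4x5 weight matrix.
w = np.random.default_rng(1).normal(size=(4, 5))
w_adv = sign_flip_inject(w, flip_fraction=0.1)
```

Because only signs change, the weight magnitudes (and hence simple norm-based statistics of the update) are preserved, which is one intuition for why such edits can slip past norm-clipping defenses.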
Pages: 11800-11808
Page count: 9
Related Papers
50 records in total
  • [31] A Detailed Survey on Federated Learning Attacks and Defenses
    Sikandar, Hira Shahzadi
    Waheed, Huda
    Tahir, Sibgha
    Malik, Saif U. R.
    Rafique, Waqas
    ELECTRONICS, 2023, 12 (02)
  • [32] Privacy and Robustness in Federated Learning: Attacks and Defenses
    Lyu, Lingjuan
    Yu, Han
    Ma, Xingjun
    Chen, Chen
    Sun, Lichao
    Zhao, Jun
    Yang, Qiang
    Yu, Philip S.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (07) : 8726 - 8746
  • [33] A Survey of Federated Learning: Review, Attacks, Defenses
    Yao, Zhongyi
    Cheng, Jieren
    Fu, Cebin
    Huang, Zhennan
    BIG DATA AND SECURITY, ICBDS 2023, PT I, 2024, 2099 : 166 - 177
  • [34] Enhancing robustness of backdoor attacks against backdoor defenses
    Hu, Bin
    Guo, Kehua
    Ren, Sheng
    Fang, Hui
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 269
  • [35] Fisher Calibration for Backdoor-Robust Heterogeneous Federated Learning
    Huang, Wenke
    Ye, Mang
    Shi, Zekun
    Du, Bo
    Tao, Dacheng
    COMPUTER VISION - ECCV 2024, PT XV, 2025, 15073 : 247 - 265
  • [36] FMDL: Federated Mutual Distillation Learning for Defending Backdoor Attacks
    Sun, Hanqi
    Zhu, Wanquan
    Sun, Ziyu
    Cao, Mingsheng
    Liu, Wenbin
    ELECTRONICS, 2023, 12 (23)
  • [37] Identifying Backdoor Attacks in Federated Learning via Anomaly Detection
    Mi, Yuxi
    Sun, Yiheng
    Guan, Jihong
    Zhou, Shuigeng
    WEB AND BIG DATA, PT III, APWEB-WAIM 2023, 2024, 14333 : 111 - 126
  • [38] Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks
    Qin, Zeyu
    Yao, Liuyi
    Chen, Daoyuan
    Li, Yaliang
    Ding, Bolin
    Cheng, Minhao
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 4743 - 4755
  • [39] GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning
    Gan, Xiaoyun
    Gan, Shanyu
    Su, Taizhi
    Liu, Peng
    2024 5TH INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKS AND INTERNET OF THINGS, CNIOT 2024, 2024, : 606 - 612
  • [40] Lockdown: Backdoor Defense for Federated Learning with Isolated Subspace Training
    Huang, Tiansheng
    Hu, Sihao
    Chow, Ka-Ho
    Ilhan, Fatih
    Tekin, Selim Furkan
    Liu, Ling
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,