Resisting Distributed Backdoor Attacks in Federated Learning: A Dynamic Norm Clipping Approach

Cited by: 10
Authors
Guo, Yifan [1]
Wang, Qianlong [2]
Ji, Tianxi [1]
Wang, Xufei [1]
Li, Pan [1]
Affiliations
[1] Case Western Reserve Univ, Cleveland, OH 44106 USA
[2] Towson Univ, Towson, MD 21252 USA
Keywords
Federated learning; distributed backdoor attacks; dynamic norm clipping
DOI
10.1109/BigData52589.2021.9671910
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
With recent advances in artificial intelligence and high-dimensional data analysis, federated learning (FL) has emerged to allow distributed data providers to collaboratively learn a model without direct access to their local sensitive data. However, limiting access to individual providers' data inevitably introduces security issues. For instance, backdoor attacks, one of the most popular data poisoning attacks in FL, severely threaten the integrity and utility of the FL system. In particular, backdoor attacks launched by multiple colluding attackers, i.e., distributed backdoor attacks, can achieve high attack success rates and are hard to detect. Existing defensive approaches, such as model inspection or model sanitization, often require access to a portion of the local training data, which renders them inapplicable to FL scenarios. Recently, a norm clipping approach, which does not rely on local training data, was developed to effectively defend against distributed backdoor attacks in FL. However, we discover that adversaries can still bypass this defense scheme through robust training because its norm clipping threshold remains unchanged. In this paper, we propose a novel defense scheme to resist distributed backdoor attacks in FL. In particular, we first identify that the main reason for the failure of the norm clipping scheme is its fixed threshold throughout the training process, which cannot capture the dynamic nature of benign local updates as the global model converges. Motivated by this, we devise a novel defense mechanism that dynamically adjusts the norm clipping threshold of local updates. Moreover, we provide a convergence analysis of our defense scheme. Evaluating it on four non-IID public datasets, we observe that our defense scheme can effectively resist distributed backdoor attacks while ensuring the global model's convergence. Notably, our scheme reduces attack success rates by 84.23% on average compared with existing defense schemes.
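The record above does not give the authors' exact rule for adjusting the threshold, so the following is only a minimal sketch of the general idea: server-side FL aggregation with an L2 norm clipping threshold recomputed every round from the received updates' norms (a hypothetical quantile-based rule), in contrast to the fixed constant that the abstract argues adversaries can learn to evade.

```python
import numpy as np

def clip_update(update, threshold):
    """Scale a local model update so its L2 norm does not exceed `threshold`."""
    norm = np.linalg.norm(update)
    if norm > threshold:
        update = update * (threshold / norm)
    return update

def aggregate_with_dynamic_clipping(global_weights, local_updates, quantile=0.5):
    """One server-side aggregation round with a per-round clipping threshold.

    A fixed-threshold defense would pass the same constant every round; here the
    threshold is recomputed as the `quantile`-th quantile of the norms of the
    updates received this round (an illustrative choice, not the paper's rule),
    so it shrinks as benign updates shrink during convergence.
    """
    norms = [np.linalg.norm(u) for u in local_updates]
    threshold = float(np.quantile(norms, quantile))  # dynamic threshold for this round
    clipped = [clip_update(u, threshold) for u in local_updates]
    return global_weights + np.mean(clipped, axis=0)

# Example: ten benign-looking updates plus one abnormally large (potentially poisoned) one.
rng = np.random.default_rng(0)
updates = [rng.normal(scale=0.01, size=100) for _ in range(10)]
updates.append(rng.normal(scale=1.0, size=100))  # oversized update gets clipped to the round's threshold
new_global = aggregate_with_dynamic_clipping(np.zeros(100), updates)
```

In this sketch the oversized update is scaled down to the per-round threshold while benign updates pass through largely unchanged; the quantile rule and the plain averaging are assumptions made for illustration only, not the authors' mechanism.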
Pages: 1172-1182
Number of pages: 11