Invariant Aggregator for Defending against Federated Backdoor Attacks

Cited by: 0
Authors
Wang, Xiaoyang [1 ,4 ]
Dimitriadis, Dimitrios
Koyejo, Sanmi [2 ]
Tople, Shruti [3 ]
Affiliations
[1] University of Illinois, Champaign, IL 61820, USA
[2] Stanford University, Stanford, CA, USA
[3] Azure Research, Redmond, WA, USA
[4] Microsoft Research, Redmond, WA, USA
Keywords: (none listed)
DOI: not available
CLC Number: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Federated learning enables training high-utility models across several clients without directly sharing their private data. As a downside, the federated setting makes the model vulnerable to various adversarial attacks in the presence of malicious clients. Despite the theoretical and empirical success in defending against attacks that aim to degrade models' utility, defending against backdoor attacks that exclusively increase model accuracy on backdoor samples, without hurting the utility on other samples, remains challenging. To this end, we first analyze the failure modes of existing defenses over a flat loss landscape, which is common for well-designed neural networks such as ResNet (He et al., 2015) but is often overlooked by previous works. We then propose an invariant aggregator that redirects the aggregated update toward invariant directions that are generally useful, by selectively masking out the update elements that favor a few, possibly malicious, clients. Theoretical results show that our approach provably mitigates backdoor attacks and remains effective over flat loss landscapes. Empirical results on three datasets with different modalities and varying numbers of clients further demonstrate that our approach mitigates a broad class of backdoor attacks with negligible cost to model utility.
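To make the abstract's masking idea concrete, below is a minimal Python sketch of one plausible element-wise rule of this kind, assuming a sign-agreement criterion across client updates; the function invariant_aggregate and its agreement_threshold parameter are illustrative assumptions, not the paper's exact algorithm.

    # Sketch: keep only update coordinates whose sign most clients agree on,
    # so that coordinates favored by a few (possibly malicious) clients are
    # masked out before averaging. Assumption-based; not the paper's method.
    import numpy as np

    def invariant_aggregate(client_updates: np.ndarray,
                            agreement_threshold: float = 0.8) -> np.ndarray:
        """Average per-client updates, zeroing low-sign-agreement coordinates.

        client_updates: array of shape (num_clients, num_params).
        agreement_threshold: minimum fraction of clients that must share the
            majority sign for a coordinate to survive the mask (assumed knob).
        """
        signs = np.sign(client_updates)              # entries in {-1, 0, +1}
        mean_sign = signs.mean(axis=0)               # per-coordinate consensus
        # Fraction agreeing with the majority sign is roughly (1 + |m|) / 2.
        agreement = (1.0 + np.abs(mean_sign)) / 2.0
        mask = agreement >= agreement_threshold      # "invariant" directions
        return client_updates.mean(axis=0) * mask

    # Toy usage: all five clients push coordinate 0 the same way, so it is
    # kept; a single outlier dominates coordinate 1, so the mask zeroes it.
    updates = np.array([[0.9, 0.0], [1.1, 0.1], [1.0, -0.1],
                        [0.8, 0.0], [1.2, 5.0]])
    print(invariant_aggregate(updates))              # -> [1.0, 0.0]

In this toy run, coordinate 1 has mixed signs (two zeros, two positives, one negative), so its agreement falls below the threshold and the outlier's 5.0 contribution is discarded, while the direction all clients share survives.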
Pages: 20
Related Papers (50 in total)
  • [1] FedPD: Defending federated prototype learning against backdoor attacks
    Tan, Zhou
    Cai, Jianping
    Li, De
    Lian, Puwei
    Liu, Ximeng
    Che, Yan
    NEURAL NETWORKS, 2025, 184
  • [2] RoPE: Defending against backdoor attacks in federated learning systems
    Wang, Yongkang
    Zhai, Di-Hua
    Xia, Yuanqing
    KNOWLEDGE-BASED SYSTEMS, 2024, 293
  • [3] Defending Against Backdoor Attacks in Federated Learning with Differential Privacy
    Miao, Lu
    Yang, Wei
    Hu, Rong
    Li, Lu
    Huang, Liusheng
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022: 2999 - 3003
  • [4] Defending Against Data and Model Backdoor Attacks in Federated Learning
    Wang, Hao
    Mu, Xuejiao
    Wang, Dong
    Xu, Qiang
    Li, Kaiju
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (24): 39276 - 39294
  • [5] Defending against Poisoning Backdoor Attacks on Federated Meta-learning
    Chen, Chien-Lun
    Babakniya, Sara
    Paolieri, Marco
    Golubchik, Leana
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (05)
  • [6] An adaptive robust defending algorithm against backdoor attacks in federated learning
    Wang, Yongkang
    Zhai, Di-Hua
    He, Yongping
    Xia, Yuanqing
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 143: 118 - 131
  • [7] Towards defending adaptive backdoor attacks in Federated Learning
    Yang, Han
    Gu, Dongbing
    He, Jianhua
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023: 5078 - 5084
  • [8] FMDL: Federated Mutual Distillation Learning for Defending Backdoor Attacks
    Sun, Hanqi
    Zhu, Wanquan
    Sun, Ziyu
    Cao, Mingsheng
    Liu, Wenbin
    ELECTRONICS, 2023, 12 (23)
  • [9] Defending Against Backdoor Attacks by Quarantine Training
    Yu, Chengxu
    Zhang, Yulai
    IEEE ACCESS, 2024, 12: 10681 - 10689
  • [10] Defending against Backdoor Attacks in Natural Language Generation
    Sun, Xiaofei
    Li, Xiaoya
    Meng, Yuxian
    Ao, Xiang
    Lyu, Lingjuan
    Li, Jiwei
    Zhang, Tianwei
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 4, 2023: 5257 - 5265