BadCleaner: Defending Backdoor Attacks in Federated Learning via Attention-Based Multi-Teacher Distillation

Cited by: 0
Authors
Zhang, Jiale [1 ]
Zhu, Chengcheng [1 ]
Ge, Chunpeng [2 ]
Ma, Chuan [3 ]
Zhao, Yanchao [4 ]
Sun, Xiaobing [1 ]
Chen, Bing [4 ]
Affiliations
[1] Yangzhou Univ, Sch Informat Engn, Yangzhou 225127, Peoples R China
[2] Shandong Univ, Sch Software, Jinan 250000, Peoples R China
[3] Zhejiang Lab, Hangzhou 311100, Peoples R China
[4] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Data models; Training; Germanium; Federated learning; Degradation; Watermarking; Training data; backdoor attacks; multi-teacher distillation; attention transfer;
DOI
10.1109/TDSC.2024.3354049
CLC Classification Number
TP3 [Computing Technology; Computer Technology];
Subject Classification Code
0812;
Abstract
As a privacy-preserving distributed learning paradigm, federated learning (FL) has been proven vulnerable to various attacks, among which the backdoor attack is one of the toughest. In this attack, malicious users attempt to embed backdoor triggers into local models, causing crafted inputs to be misclassified as the targeted labels. Several defense mechanisms have been proposed against such attacks, but they may lose effectiveness due to the following drawbacks. First, current methods rely heavily on a large amount of labeled clean data, which is an impractical assumption in FL. Moreover, an unavoidable performance degradation usually accompanies the defensive procedure. To alleviate these concerns, we propose BadCleaner, a lossless and efficient backdoor defense scheme via attention-based federated multi-teacher distillation. First, BadCleaner can effectively tune the backdoored joint model without performance degradation by distilling in-depth knowledge from multiple teachers using only a small amount of unlabeled clean data. Second, to fully eliminate the hidden backdoor patterns, we present an attention transfer method that steers the model's attention away from the trigger regions. Extensive evaluation demonstrates that BadCleaner reduces the success rates of state-of-the-art backdoor attacks without compromising model performance.
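To make the abstract's core idea concrete, below is a minimal, hypothetical PyTorch-style sketch of one attention-based multi-teacher distillation step on a batch of unlabeled clean data. The function names (`attention_map`, `distill_step`), the assumption that each model returns a `(logits, feature_map)` pair, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: attention-transfer multi-teacher distillation on
# unlabeled clean data. Assumes PyTorch; names and model interfaces are
# illustrative, not BadCleaner's actual code.
import torch
import torch.nn.functional as F

def attention_map(feature):
    """Spatial attention map: channel-wise mean of squared activations, L2-normalized."""
    # feature: (batch, channels, H, W)
    att = feature.pow(2).mean(dim=1)               # (batch, H, W)
    return F.normalize(att.flatten(1), dim=1)      # (batch, H*W)

def distill_step(student, teachers, inputs, temperature=4.0, alpha=0.5):
    """One distillation step; assumes each model returns (logits, last feature map)
    and that student and teachers share the same spatial feature size."""
    s_logits, s_feat = student(inputs)
    with torch.no_grad():
        t_outputs = [t(inputs) for t in teachers]
        t_logits = torch.stack([o[0] for o in t_outputs]).mean(dim=0)               # ensemble soft labels
        t_att = torch.stack([attention_map(o[1]) for o in t_outputs]).mean(dim=0)    # ensemble attention

    # Soft-label distillation loss (temperature-scaled KL divergence)
    kd_loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    # Attention-transfer loss: pull the student's spatial attention toward the
    # teachers', which discourages attention concentrated on trigger regions.
    at_loss = F.mse_loss(attention_map(s_feat), t_att)

    return alpha * kd_loss + (1 - alpha) * at_loss
```

In practice, such a step would be repeated over the small unlabeled clean set to fine-tune the backdoored joint model, with the attention term handling trigger suppression and the soft-label term preserving accuracy on the main task.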
Pages: 4559 - 4573
Number of pages: 15
Related Papers
50 records in total
  • [1] FMDL: Federated Mutual Distillation Learning for Defending Backdoor Attacks
    Sun, Hanqi
    Zhu, Wanquan
    Sun, Ziyu
    Cao, Mingsheng
    Liu, Wenbin
    [J]. ELECTRONICS, 2023, 12 (23)
  • [2] Towards defending adaptive backdoor attacks in Federated Learning
    Yang, Han
    Gu, Dongbing
    He, Jianhua
    [J]. ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 5078 - 5084
  • [3] RoPE: Defending against backdoor attacks in federated learning systems
    Wang, Yongkang
    Zhai, Di-Hua
    Xia, Yuanqing
    [J]. KNOWLEDGE-BASED SYSTEMS, 2024, 293
  • [4] DEFENDING AGAINST BACKDOOR ATTACKS IN FEDERATED LEARNING WITH DIFFERENTIAL PRIVACY
    Miao, Lu
    Yang, Wei
    Hu, Rong
    Li, Lu
    Huang, Liusheng
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2999 - 3003
  • [5] ADFL: Defending backdoor attacks in federated learning via adversarial distillation
    Zhu, Chengcheng
    Zhang, Jiale
    Sun, Xiaobing
    Chen, Bing
    Meng, Weizhi
    [J]. COMPUTERS & SECURITY, 2023, 132
  • [6] Learning Lightweight Object Detectors via Multi-Teacher Progressive Distillation
    Cao, Shengcao
    Li, Mengtian
    Hays, James
    Ramanan, Deva
    Wang, Yu-Xiong
    Gui, Liang-Yan
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [7] An adaptive robust defending algorithm against backdoor attacks in federated learning
    Wang, Yongkang
    Zhai, Di-Hua
    He, Yongping
    Xia, Yuanqing
    [J]. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 143 : 118 - 131
  • [8] Defending against Poisoning Backdoor Attacks on Federated Meta-learning
    Chen, Chien-Lun
    Babakniya, Sara
    Paolieri, Marco
    Golubchik, Leana
    [J]. ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (05)
  • [9] Knowledge Distillation via Multi-Teacher Feature Ensemble
    Ye, Xin
    Jiang, Rongxin
    Tian, Xiang
    Zhang, Rui
    Chen, Yaowu
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 566 - 570
  • [10] Fed-NAD: Backdoor Resilient Federated Learning via Neural Attention Distillation
    Ma, Hao
    Qi, Senmao
    Yao, Jiayue
    Yuan, Yuan
    Zou, Yifei
    Yu, Dongxiao
    [J]. PROCEEDINGS OF THE 2024 IEEE 10TH INTERNATIONAL CONFERENCE ON INTELLIGENT DATA AND SECURITY, IDS 2024, 2024, : 7 - 13