BadCleaner: Defending Backdoor Attacks in Federated Learning via Attention-Based Multi-Teacher Distillation

Times Cited: 0
|
Authors
Zhang, Jiale [1 ]
Zhu, Chengcheng [1 ]
Ge, Chunpeng [2 ]
Ma, Chuan [3 ]
Zhao, Yanchao [4 ]
Sun, Xiaobing [1 ]
Chen, Bing [4 ]
Affiliations
[1] Yangzhou Univ, Sch Informat Engn, Yangzhou 225127, Peoples R China
[2] Shandong Univ, Sch Software, Jinan 250000, Peoples R China
[3] Zhejiang Lab, Hangzhou 311100, Peoples R China
[4] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Data models; Training; Federated learning; Degradation; Watermarking; Training data; backdoor attacks; multi-teacher distillation; attention transfer;
DOI
10.1109/TDSC.2024.3354049
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
As a privacy-preserving distributed learning paradigm, federated learning (FL) has been proven to be vulnerable to various attacks, among which the backdoor attack is one of the toughest. In this attack, malicious users attempt to embed backdoor triggers into local models, causing crafted inputs to be misclassified as the targeted labels. Several defense mechanisms have been proposed to address such attacks, but they may lose effectiveness due to the following drawbacks. First, current methods rely heavily on massive amounts of labeled clean data, which is an impractical setting in FL. Moreover, an unavoidable performance degradation usually occurs during the defensive procedure. To alleviate these concerns, we propose BadCleaner, a lossless and efficient backdoor defense scheme via attention-based federated multi-teacher distillation. First, BadCleaner can effectively tune the backdoored joint model without performance degradation by distilling in-depth knowledge from multiple teachers with only a small amount of unlabeled clean data. Second, to fully eliminate the hidden backdoor patterns, we present an attention transfer method that diverts the models' attention away from the trigger regions. Extensive evaluation demonstrates that BadCleaner can reduce the success rates of state-of-the-art backdoor attacks without compromising model performance.
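For a concrete picture of the mechanism described above, the following is a minimal PyTorch-style sketch of one possible attention-based multi-teacher distillation step on unlabeled clean data. It assumes each model returns a `(logits, feature_map)` pair and uses a Zagoruyko-Komodakis-style attention-transfer term; the function names and the hyperparameters `T` and `beta` are illustrative assumptions, not the exact formulation used in BadCleaner.

```python
# Illustrative sketch (not the paper's exact method): one distillation step that
# combines averaged multi-teacher soft labels with an attention-transfer term.
import torch
import torch.nn.functional as F

def attention_map(feature):
    # Spatial attention map: sum of squared activations over channels,
    # flattened and L2-normalized (attention-transfer style).
    a = feature.pow(2).sum(dim=1)               # (B, C, H, W) -> (B, H, W)
    return F.normalize(a.flatten(1), dim=1)     # (B, H*W)

def distill_step(student, teachers, x, optimizer, T=4.0, beta=1e3):
    """One update on a batch of unlabeled clean inputs `x`.

    Assumes `student` and every teacher return (logits, feature_map).
    """
    student.train()
    s_logits, s_feat = student(x)

    with torch.no_grad():
        t_logits, t_maps = [], []
        for teacher in teachers:
            logits, feat = teacher(x)
            t_logits.append(logits)
            t_maps.append(attention_map(feat))
        t_logits = torch.stack(t_logits).mean(0)   # averaged teacher soft labels
        t_map = torch.stack(t_maps).mean(0)        # averaged teacher attention

    # Soft-label distillation: KL divergence between temperature-softened outputs.
    kd_loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                       F.softmax(t_logits / T, dim=1),
                       reduction="batchmean") * (T * T)
    # Attention transfer: pull the student's spatial attention toward the teachers',
    # suppressing attention on trigger regions that the clean teachers ignore.
    at_loss = (attention_map(s_feat) - t_map).pow(2).sum(dim=1).mean()

    loss = kd_loss + beta * at_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the choice of teacher models, the small unlabeled clean set, and the weighting between the two loss terms would follow the paper's own setup.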
Pages: 4559-4573
Number of pages: 15
Related Papers
50 records in total
  • [21] Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective
    Qin, Zhen
    Chen, Feiyi
    Zhi, Chen
    Yan, Xueqiang
    Deng, Shuiguang
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 13, 2024, : 14677 - 14685
  • [22] Enhancing Recommendation Capabilities Using Multi-Head Attention-Based Federated Knowledge Distillation
    Wu, Aming
    Kwon, Young-Woo
    [J]. IEEE ACCESS, 2023, 11 : 45850 - 45861
  • [23] SCFL: Mitigating backdoor attacks in federated learning based on SVD and clustering 
    Wang, Yongkang
    Zhai, Di-Hua
    Xia, Yuanqing
    [J]. COMPUTERS & SECURITY, 2023, 133
  • [24] BaFFLe: Backdoor Detection via Feedback-based Federated Learning
    Andreina, Sebastien
    Marson, Giorgia Azzurra
    Moellering, Helen
    Karame, Ghassan
    [J]. 2021 IEEE 41ST INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2021), 2021, : 852 - 863
  • [25] A multi-graph neural group recommendation model with meta-learning and multi-teacher distillation
    Zhou, Weizhen
    Huang, Zhenhua
    Wang, Cheng
    Chen, Yunwen
    [J]. KNOWLEDGE-BASED SYSTEMS, 2023, 276
  • [26] Mitigating Biases in Student Performance Prediction via Attention-Based Personalized Federated Learning
    Chu, Yun-Wei
    Hosseinalipour, Seyyedali
    Tenorio, Elizabeth
    Cruz, Laura
    Douglas, Kerrie
    Lan, Andrew
    Brinton, Christopher
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 3033 - 3042
  • [27] Defending against Membership Inference Attacks in Federated learning via Adversarial Example
    Xie, Yuanyuan
    Chen, Bing
    Zhang, Jiale
    Wu, Di
    [J]. 2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021), 2021, : 153 - 160
  • [28] FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks
    Castillo, Jorge
    Rieger, Phillip
    Fereidooni, Hossein
    Chen, Qian
    Sadeghi, Ahmad-Reza
    [J]. 39TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2023, 2023, : 647 - 661
  • [29] DE-MKD: Decoupled Multi-Teacher Knowledge Distillation Based on Entropy
    Cheng, Xin
    Zhang, Zhiqiang
    Weng, Wei
    Yu, Wenxin
    Zhou, Jinjia
    [J]. MATHEMATICS, 2024, 12 (11)
  • [30] Multi-teacher knowledge distillation based on joint Guidance of Probe and Adaptive Corrector
    Shang, Ronghua
    Li, Wenzheng
    Zhu, Songling
    Jiao, Licheng
    Li, Yangyang
    [J]. NEURAL NETWORKS, 2023, 164 : 345 - 356