BadCleaner: Defending Backdoor Attacks in Federated Learning via Attention-Based Multi-Teacher Distillation

Times Cited: 0
Authors
Zhang, Jiale [1 ]
Zhu, Chengcheng [1 ]
Ge, Chunpeng [2 ]
Ma, Chuan [3 ]
Zhao, Yanchao [4 ]
Sun, Xiaobing [1 ]
Chen, Bing [4 ]
Affiliations
[1] Yangzhou Univ, Sch Informat Engn, Yangzhou 225127, Peoples R China
[2] Shandong Univ, Sch Software, Jinan 250000, Peoples R China
[3] Zhejiang Lab, Hangzhou 311100, Peoples R China
[4] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
Data models; Training; Germanium; Federated learning; Degradation; Watermarking; Training data; backdoor attacks; multi-teacher distillation; attention transfer;
DOI
10.1109/TDSC.2024.3354049
CLC Number
TP3 [Computing technology, computer technology];
Discipline Code
0812;
Abstract
As a privacy-preserving distributed learning paradigm, federated learning (FL) has been proven to be vulnerable to various attacks, among which the backdoor attack is one of the toughest. In this attack, malicious users attempt to embed backdoor triggers into local models, causing crafted inputs to be misclassified as the targeted labels. Several defense mechanisms have been proposed against such attacks, but they may lose effectiveness due to the following drawbacks. First, current methods rely heavily on massive labeled clean data, which is an impractical assumption in FL. Second, an unavoidable performance degradation usually occurs during the defensive procedure. To alleviate these concerns, we propose BadCleaner, a lossless and efficient backdoor defense scheme via attention-based federated multi-teacher distillation. First, BadCleaner can effectively tune the backdoored joint model without performance degradation by distilling the in-depth knowledge of multiple teachers with only a small amount of unlabeled clean data. Second, to fully eliminate the hidden backdoor patterns, we present an attention transfer method that diverts the models' attention away from the trigger regions. The extensive evaluation demonstrates that BadCleaner can reduce the success rates of state-of-the-art backdoor attacks without compromising the model performance.
Pages: 4559-4573
Page Count: 15
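The abstract describes two generic building blocks: soft-label distillation from multiple teachers on a small amount of unlabeled clean data, and an attention-transfer term that steers the student's attention away from trigger regions. The following is a minimal PyTorch sketch of those two standard techniques, not the authors' BadCleaner implementation; the function names (attention_map, distill_step), the assumption that each model's forward pass returns (logits, feature_map), and the hyperparameters T and beta are all illustrative assumptions.

# Hypothetical sketch: multi-teacher soft-label distillation + attention transfer.
# Not the authors' code; assumes model(x) -> (logits, feature_map).
import torch
import torch.nn.functional as F

def attention_map(feat):
    """Spatial attention: channel-wise mean of squared activations, L2-normalized per sample."""
    a = feat.pow(2).mean(dim=1).flatten(1)      # (B, C, H, W) -> (B, H*W)
    return F.normalize(a, dim=1)

def distill_step(student, teachers, x, optimizer, T=4.0, beta=1000.0):
    """One cleaning update of the (possibly backdoored) student on an unlabeled clean batch x."""
    student.train()
    with torch.no_grad():
        outs = [t(x) for t in teachers]                              # [(logits, feat), ...]
        soft_targets = torch.stack(
            [F.softmax(logits / T, dim=1) for logits, _ in outs]).mean(dim=0)
        teacher_att = torch.stack(
            [attention_map(feat) for _, feat in outs]).mean(dim=0)
    s_logits, s_feat = student(x)
    # Soft-label distillation against the averaged teacher predictions (no labels needed).
    kd_loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                       soft_targets, reduction="batchmean") * (T * T)
    # Attention transfer: pull the student's attention toward the teachers',
    # suppressing attention concentrated on backdoor trigger regions.
    at_loss = (attention_map(s_feat) - teacher_att).pow(2).mean()
    loss = kd_loss + beta * at_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch the teachers' softened predictions and attention maps are simply averaged before being matched by the student; how BadCleaner actually combines its teachers is described in the paper itself.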