Federated Learning Backdoor Defense Based on Watermark Integrity

Cited: 0
Authors
Hou, Yinjian [1 ]
Zhao, Yancheng [1 ]
Yao, Kaiqi [1 ]
机构
[1] National University of Defense Technology, College of Systems Engineering, Changsha, Hunan, People's Republic of China
Keywords
federated learning; poisoning attacks; high poisoned proportion; watermark integrity
DOI
10.1109/BIGDIA63733.2024.10808344
Abstract
As federated learning becomes widely deployed, security issues, especially the threat of backdoor attacks, grow increasingly prominent. Current defenses against backdoor poisoning in federated learning lose effectiveness when the proportion of poisoned data injected by malicious participants exceeds 50%. To address this, we propose a backdoor defense method for federated learning based on model watermarking. The aggregation server generates an initial global model carrying a watermark and distributes it to local servers; by checking the integrity of the watermark in each returned model, it detects malicious participants, effectively enhancing the robustness of the global model. Experiments on the CIFAR-10 and Tiny-ImageNet datasets demonstrate that our method effectively detects and defends against backdoor poisoning attacks under a high proportion of poisoned data, as well as across different triggers, attack methods, and scales.
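The mechanism the abstract describes — plant a watermark in the global model, then flag clients whose returned models no longer preserve it before aggregating — can be sketched as a toy. This is an illustrative example only, not the authors' implementation: the linear sign-model, the names `watermark_accuracy` and `filter_and_aggregate`, and the 0.8 retention threshold are all assumptions made for the sketch.

```python
import numpy as np

def watermark_accuracy(weights, wm_inputs, wm_labels):
    """Fraction of watermark trigger inputs that a toy linear model
    (label = sign(w . x)) still maps to the planted watermark labels."""
    preds = np.sign(wm_inputs @ weights)
    return float(np.mean(preds == wm_labels))

def filter_and_aggregate(client_weights, wm_inputs, wm_labels, threshold=0.8):
    """Keep only clients whose returned model preserves the watermark,
    then average the survivors (plain FedAvg over the retained clients).
    The 0.8 threshold is a hypothetical choice for this sketch."""
    kept = [w for w in client_weights
            if watermark_accuracy(w, wm_inputs, wm_labels) >= threshold]
    if not kept:
        raise ValueError("all clients flagged as malicious")
    return np.mean(kept, axis=0), len(kept)

# Toy round: the server's watermark labels are defined by a base model;
# benign clients drift slightly, one malicious client destroys the watermark.
rng = np.random.default_rng(0)
wm_inputs = rng.normal(size=(20, 5))           # watermark trigger set
base = rng.normal(size=5)                      # watermarked global model
wm_labels = np.sign(wm_inputs @ base)          # labels the watermark encodes
benign = [base + 0.01 * rng.normal(size=5) for _ in range(3)]
malicious = [-base]                            # flips every watermark label
agg, n_kept = filter_and_aggregate(benign + malicious, wm_inputs, wm_labels)
```

Benign updates barely perturb the watermark, so their retention stays near 1.0 and they are averaged; the malicious update fails the integrity check and is excluded, so the aggregated model still carries the watermark.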
Pages: 288-294 (7 pages)
Related Papers
50 items in total
  • [1] Federated Learning Watermark Based on Model Backdoor
    Li, X.; Deng, T.-P.; Xiong, J.-B.; Jin, B.; Lin, J.
    Ruan Jian Xue Bao/Journal of Software, 2024, 35(07): 3454-3468
  • [2] Survey of Backdoor Attack and Defense Algorithms Based on Federated Learning
    Liu, Jialang; Guo, Yanming; Lao, Mingrui; Yu, Tianyuan; Wu, Yulun; Feng, Yunhao; Wu, Jiazhuang
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2024, 61(10): 2607-2626
  • [3] Backdoor defense method in federated learning based on contrastive training
    Zhang, J.; Zhu, C.; Cheng, X.; Sun, X.; Chen, B.
    Tongxin Xuebao/Journal on Communications, 45(03): 182-196
  • [4] BayBFed: Bayesian Backdoor Defense for Federated Learning
    Kumari, Kavita; Rieger, Phillip; Fereidooni, Hossein; Jadliwala, Murtuza; Sadeghi, Ahmad-Reza
    2023 IEEE Symposium on Security and Privacy (SP), 2023: 737-754
  • [5] Defense against backdoor attack in federated learning
    Lu, Shiwei; Li, Ruihu; Liu, Wenbin; Chen, Xuan
    Computers & Security, 2022, 121
  • [6] Backdoor Attack Defense Method for Federated Learning Based on Model Watermarking
    Guo, J.-J.; Liu, J.-Z.; Ma, Y.; Liu, Z.-Q.; Xiong, Y.-P.; Miao, K.; Li, J.-X.; Ma, J.-F.
    Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47(03): 662-676
  • [7] Knowledge Distillation Based Defense for Audio Trigger Backdoor in Federated Learning
    Chen, Yu-Wen; Ke, Bo-Hsu; Chen, Bo-Zhong; Chiu, Si-Rong; Tu, Chun-Wei; Kuo, Jian-Jhih
    IEEE Conference on Global Communications (GLOBECOM), 2023: 4271-4276
  • [8] Successive Interference Cancellation Based Defense for Trigger Backdoor in Federated Learning
    Chen, Yu-Wen; Ke, Bo-Hsu; Chen, Bo-Zhong; Chiu, Si-Rong; Tu, Chun-Wei; Kuo, Jian-Jhih
    ICC 2023 - IEEE International Conference on Communications, 2023: 26-32
  • [9] GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning
    Gan, Xiaoyun; Gan, Shanyu; Su, Taizhi; Liu, Peng
    2024 5th International Conference on Computing, Networks and Internet of Things (CNIOT 2024), 2024: 606-612
  • [10] Lockdown: Backdoor Defense for Federated Learning with Isolated Subspace Training
    Huang, Tiansheng; Hu, Sihao; Chow, Ka-Ho; Ilhan, Fatih; Tekin, Selim Furkan; Liu, Ling
    Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023