Federated Learning Backdoor Defense Based on Watermark Integrity

Cited by: 0
Authors
Hou, Yinjian [1 ]
Zhao, Yancheng [1 ]
Yao, Kaiqi [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Syst Engn, Changsha, Hunan, Peoples R China
Keywords
federated learning; poisoning attacks; high poisoned proportion; watermark integrity
DOI
10.1109/BIGDIA63733.2024.10808344
Abstract
As federated learning becomes more widely deployed, security issues, particularly the threat of backdoor attacks, have grown increasingly prominent. Existing defenses against backdoor poisoning in federated learning lose effectiveness when the proportion of poisoned data injected by malicious participants exceeds 50%. To address this, we propose a backdoor defense method for federated learning based on model watermarking. The aggregation server generates an initial global model carrying a watermark and distributes it to local participants; by checking the integrity of the watermark in each returned model, the server detects malicious participants and thereby strengthens the robustness of the global model. Experiments on the CIFAR-10 and Tiny-ImageNet datasets demonstrate that our method effectively detects and defends against backdoor poisoning attacks under a high proportion of poisoned data, across different triggers, attack methods, and scales.
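The abstract does not give implementation details, but the general watermark-integrity idea can be illustrated with a toy sketch: the server embeds a secret trigger set (the watermark) into the initial global model, and after local training it discards any returned model whose watermark accuracy has degraded, on the assumption that heavy backdoor poisoning overwrites the watermark. Everything below (the linear model, the trigger set, the 0.9 threshold) is a hypothetical simplification, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret watermark: random "trigger" inputs with fixed target labels,
# known only to the aggregation server (illustrative assumption).
D, K, N_WM = 16, 4, 12
wm_x = rng.normal(size=(N_WM, D))
wm_y = rng.integers(0, K, size=N_WM)

def wm_accuracy(W):
    """Fraction of watermark triggers the linear model still labels correctly."""
    return float(np.mean(np.argmax(wm_x @ W, axis=1) == wm_y))

# Server embeds the watermark into the initial global model: fit W so that
# wm_x @ W reproduces the one-hot trigger labels (exact, since N_WM < D).
W_global = np.linalg.lstsq(wm_x, np.eye(K)[wm_y], rcond=None)[0]

# Simulated client returns: benign clients perturb the model only slightly
# (watermark survives); a backdooring client replaces it wholesale.
benign = [W_global + 0.01 * rng.normal(size=W_global.shape) for _ in range(3)]
malicious = rng.normal(size=W_global.shape)
returned = benign + [malicious]

# Defense: aggregate (plain FedAvg) only over updates whose returned
# watermark is still intact.
THRESHOLD = 0.9
kept = [W for W in returned if wm_accuracy(W) >= THRESHOLD]
W_new = np.mean(kept, axis=0)

print(f"clients kept: {len(kept)}/{len(returned)}")
```

In this sketch the malicious update fails the integrity check and is excluded before averaging; in a real system the watermark would be embedded via backdoor-style fine-tuning of a deep model rather than a closed-form fit.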
Pages: 288-294
Page count: 7
Related Papers
50 records in total
  • [21] FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning
    Jia, Jinyuan
    Yuan, Zhuowen
    Sahabandu, Dinuka
    Niu, Luyao
    Rajabi, Arezoo
    Ramasubramanian, Bhaskar
    Li, Bo
    Poovendran, Radha
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [22] BADFL: Backdoor Attack Defense in Federated Learning From Local Model Perspective
    Zhang, Haiyan
    Li, Xinghua
    Xu, Mengfan
    Liu, Ximeng
    Wu, Tong
    Weng, Jian
    Deng, Robert H.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (11) : 5661 - 5674
  • [23] Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning
    Yang, Jie
    Zheng, Jun
    Wang, Haochen
    Li, Jiaxing
    Sun, Haipeng
    Han, Weifeng
    Jiang, Nan
    Tan, Yu-An
    SENSORS, 2023, 23 (03)
  • [24] Unlearning Backdoor Attacks in Federated Learning
    Wu, Chen
    Zhu, Sencun
    Mitra, Prasenjit
    Wang, Wei
    2024 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY, CNS 2024, 2024,
  • [25] On the Vulnerability of Backdoor Defenses for Federated Learning
    Fang, Pei
    Chen, Jinghui
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 10, 2023, : 11800 - 11808
  • [26] Federated Learning Backdoor Attack Based on Frequency Domain Injection
    Liu, Jiawang
    Peng, Changgen
    Tan, Weijie
    Shi, Chenghui
    ENTROPY, 2024, 26 (02)
  • [27] BaFFLe: Backdoor Detection via Feedback-based Federated Learning
    Andreina, Sebastien
    Marson, Giorgia Azzurra
    Moellering, Helen
    Karame, Ghassan
    2021 IEEE 41ST INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2021), 2021, : 852 - 863
  • [28] Backdoor Federated Learning-Based mmWave Beam Selection
    Zhang, Zhengming
    Yang, Ruming
    Zhang, Xiangyu
    Li, Chunguo
    Huang, Yongming
    Yang, Luxi
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2022, 70 (10) : 6563 - 6578
  • [29] Byzantine Robust Federated Learning Scheme Based on Backdoor Triggers
    Yang, Zheng
    Gu, Ke
    Zuo, Yiming
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 79 (02): : 2813 - 2831
  • [30] Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
    Wan, Yichen
    Qu, Youyang
    Ni, Wei
    Xiang, Yong
    Gao, Longxiang
    Hossain, Ekram
    IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2024, 26 (03): : 1861 - 1897