Federated Learning Backdoor Defense Based on Watermark Integrity

Cited by: 0
Authors
Hou, Yinjian [1 ]
Zhao, Yancheng [1 ]
Yao, Kaiqi [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Syst Engn, Changsha, Hunan, Peoples R China
Keywords
federated learning; poisoning attacks; high poisoned proportion; watermark integrity;
DOI
10.1109/BIGDIA63733.2024.10808344
Abstract
As federated learning sees wider deployment, security issues, especially the threat of backdoor attacks, have become increasingly prominent. Existing defenses against backdoor poisoning in federated learning lose effectiveness when the proportion of poisoned data injected by malicious participants exceeds 50%. To address this issue, we propose a backdoor defense method for federated learning based on model watermarking. The aggregation server generates an initial global model carrying a watermark and distributes it to the local participants; by checking the integrity of the watermark in the models they return, the server detects malicious participants and thereby strengthens the robustness of the global model. Experiments on the CIFAR-10 and Tiny-ImageNet datasets demonstrate that our method effectively detects and defends against backdoor poisoning attacks under a high proportion of poisoned data, as well as across different triggers, attack methods, and attack scales.
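The following is a minimal, hypothetical PyTorch sketch of the watermark-integrity idea outlined in the abstract, not the authors' implementation: the server embeds a watermark by fitting the initial global model on a secret trigger set, distributes the model, and at aggregation time keeps only client models whose watermark accuracy stays above a threshold. The trigger set, the 0.8 threshold, and the SmallCNN architecture are illustrative assumptions.

# Hedged sketch of watermark-integrity filtering before FedAvg aggregation.
# Hypothetical pieces: trigger set, threshold=0.8, and the toy SmallCNN model.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy CIFAR-10-sized classifier standing in for the global model."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.fc = nn.Linear(16 * 32 * 32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

def embed_watermark(model, trigger_x, trigger_y, epochs=20, lr=1e-3):
    """Server side: fit the initial global model on a secret trigger set so the
    watermark (trigger -> fixed labels) is carried by the model weights."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(trigger_x), trigger_y)
        loss.backward()
        opt.step()
    return model

@torch.no_grad()
def watermark_accuracy(model, trigger_x, trigger_y):
    """Fraction of trigger samples still classified with the watermark labels."""
    preds = model(trigger_x).argmax(dim=1)
    return (preds == trigger_y).float().mean().item()

def aggregate_with_integrity_check(global_model, client_states,
                                   trigger_x, trigger_y, threshold=0.8):
    """Keep only returned client models whose watermark accuracy remains above
    the threshold, then average the surviving weights (FedAvg-style)."""
    kept = []
    for state in client_states:
        probe = copy.deepcopy(global_model)
        probe.load_state_dict(state)
        if watermark_accuracy(probe, trigger_x, trigger_y) >= threshold:
            kept.append(state)
    if not kept:  # if every update looks suspect, keep the previous global model
        return global_model.state_dict()
    return {k: torch.stack([s[k].float() for s in kept]).mean(0)
            for k in kept[0]}

# Example usage with a hypothetical trigger set of 32 random images labeled 0:
# trigger_x = torch.randn(32, 3, 32, 32)
# trigger_y = torch.zeros(32, dtype=torch.long)
# global_model = embed_watermark(SmallCNN(), trigger_x, trigger_y)

The threshold is a free parameter in this sketch; a poisoned local update that overwrites the watermark behavior would drop below it and be excluded from aggregation, which is the intuition behind checking watermark integrity on returned models.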
Pages: 288 - 294
Number of pages: 7
Related Papers
50 records in total
  • [31] Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning
    Abad, Gorka
    Paguada, Servio
    Ersoy, Oguzhan
    Picek, Stjepan
    Ramirez-Duran, Victor Julio
    Urbieta, Aitor
    2023 IEEE CONFERENCE ON SECURE AND TRUSTWORTHY MACHINE LEARNING, SATML, 2023, : 377 - 391
  • [32] Backdoor Federated Learning by Poisoning Key Parameters
    Song, Xuan
    Li, Huibin
    Hu, Kailang
    Zai, Guangjun
    ELECTRONICS, 2025, 14 (01)
  • [33] SCFL: Mitigating backdoor attacks in federated learning based on SVD and clustering 
    Wang, Yongkang
    Zhai, Di-Hua
    Xia, Yuanqing
    COMPUTERS & SECURITY, 2023, 133
  • [34] Federated Learning Backdoor Attack Scheme Based on Generative Adversarial Network
    Chen D.
    Fu A.
    Zhou C.
    Chen Z.
    Fu, Anmin (fuam@njust.edu.cn), 1600, Science Press (58): 2364 - 2373
  • [35] Dual-domain based backdoor attack against federated learning
    Li, Guorui
    Chang, Runxing
    Wang, Ying
    Wang, Cong
    NEUROCOMPUTING, 2025, 623
  • [36] ScanFed: Scalable Behavior-based Backdoor Detection in Federated Learning
    Ning, Rui
    Li, Jiang
    Xin, Chunsheng
    Wang, Chonggang
    Li, Xu
    Gazda, Robert
    Cho, Jin-Hee
    Wu, Hongyi
    2023 IEEE 43RD INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS, ICDCS, 2023, : 782 - 793
  • [37] Optimally Mitigating Backdoor Attacks in Federated Learning
    Walter, Kane
    Mohammady, Meisam
    Nepal, Surya
    Kanhere, Salil S.
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 2949 - 2963
  • [38] ANODYNE: Mitigating backdoor attacks in federated learning
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 259
  • [39] BDEL: A Backdoor Attack Defense Method Based on Ensemble Learning
    Xing, Zhihuan
    Lan, Yuqing
    Yu, Yin
    Cao, Yong
    Yang, Xiaoyi
    Yu, Yichun
    Yu, Dan
    PRICAI 2024: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2025, 15281 : 221 - 235
  • [40] BadVFL: Backdoor Attacks in Vertical Federated Learning
    Naseri, Mohammad
    Han, Yufei
    De Cristofaro, Emiliano
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 2013 - 2028