Defending Against Data and Model Backdoor Attacks in Federated Learning

Cited by: 1
Authors
Wang, Hao [1 ,2 ,3 ]
Mu, Xuejiao [1 ,3 ]
Wang, Dong [4 ]
Xu, Qiang [5 ]
Li, Kaiju [6 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Minist Culture & Tourism, Key Lab Tourism Multisource Data Percept & Decis, Chongqing 400065, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Key Lab Cyberspace Big Data Intelligent Secur, Minist Educ, Chongqing 400065, Peoples R China
[3] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing 400065, Peoples R China
[4] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou 310018, Peoples R China
[5] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[6] Guizhou Univ Finance & Econ, Sch Informat, Guiyang 550025, Guizhou, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 24
Keywords
Data models; Training; Servers; Computational modeling; Filtering; Low-pass filters; Backdoor attack; Differential privacy; federated learning (FL); homomorphic encryption; spectrum filtering;
DOI
10.1109/JIOT.2024.3415628
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812
Abstract
Federated learning (FL) enables collaborative model training without transferring local data, which greatly improves training efficiency. However, FL is susceptible to data and model backdoor attacks. To address data backdoor attacks, in this article, we propose a defense method named TSF. TSF transforms data from the time domain to the frequency domain and designs a low-pass filter to mitigate the impact of high-frequency signals introduced by backdoor samples. Additionally, we apply homomorphic encryption to local updates to prevent the server from inferring users' data. We also introduce a defense method against model backdoor attacks named ciphertext field similarity detection differential privacy (CFSD-DP). CFSD-DP screens malicious updates using cosine similarity detection in the ciphertext domain and perturbs the global model using a differential privacy mechanism to mitigate the impact of model backdoor attacks. It can effectively detect malicious updates and safeguard the privacy of the global model. Experimental results show that the proposed TSF and CFSD-DP achieve a 73.8% degradation in backdoor accuracy with only a 3% impact on main-task accuracy compared with state-of-the-art schemes. Code is available at https://github.com/whwh456/TSF.
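The abstract sketches two mechanisms: frequency-domain low-pass filtering of input data (the TSF idea) and similarity-based screening of client updates followed by differential-privacy-style perturbation (the CFSD-DP idea). The NumPy sketch below illustrates both in plaintext only, omitting the homomorphic-encryption layer the paper actually operates in; all function names, the radial cutoff, the similarity threshold, and the noise scale are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lowpass_filter(image, cutoff_ratio=0.25):
    """Suppress high-frequency components of a 2-D input via FFT:
    keep only frequencies inside a radial cutoff, then invert."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff_ratio * min(h, w) / 2
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def screen_updates(updates, threshold=0.5):
    """Keep indices of client updates whose cosine similarity to the
    coordinate-wise mean update is at least `threshold` (a plaintext
    stand-in for the paper's ciphertext-domain check)."""
    mean = np.mean(updates, axis=0)
    keep = []
    for i, u in enumerate(updates):
        cos = u @ mean / (np.linalg.norm(u) * np.linalg.norm(mean) + 1e-12)
        if cos >= threshold:
            keep.append(i)
    return keep

def dp_perturb(model, sigma=0.01, rng=None):
    """Add Gaussian noise to the aggregated model parameters,
    in the spirit of differential-privacy perturbation."""
    rng = np.random.default_rng(rng)
    return model + rng.normal(0.0, sigma, size=model.shape)
```

For example, an update pointing opposite to the mean direction fails the cosine check and is excluded before aggregation, while benign updates pass and the aggregate is then noised before release.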
Pages: 39276-39294
Page count: 19
Related Papers
50 records in total
  • [41] FLARE: Defending Federated Learning against Model Poisoning Attacks via Latent Space Representations
    Wang, Ning
    Xiao, Yang
    Chen, Yimin
    Hu, Yang
    Lou, Wenjing
    Hou, Y. Thomas
    ASIA CCS'22: PROCEEDINGS OF THE 2022 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2022, : 946 - 958
  • [42] FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
    Zhang, Zaixi
    Cao, Xiaoyu
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 2545 - 2555
  • [44] Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning
    Tejankar, Ajinkya
    Sanjabi, Maziar
    Wang, Qifan
    Wang, Sinong
    Firooz, Hamed
    Pirsiavash, Hamed
    Tan, Liang
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 12239 - 12249
  • [45] SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics
    Hayase, Jonathan
    Kong, Weihao
    Somani, Raghav
    Oh, Sewoong
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [46] Artemis: Defending Against Backdoor Attacks via Distribution Shift
    Xue, Meng
    Wang, Zhixian
    Zhang, Qian
    Gong, Xueluan
    Liu, Zhihang
    Chen, Yanjiao
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2025, 22 (02) : 1781 - 1795
  • [47] An Investigation of Recent Backdoor Attacks and Defenses in Federated Learning
    Chen, Qiuxian
    Tao, Yizheng
    2023 EIGHTH INTERNATIONAL CONFERENCE ON FOG AND MOBILE EDGE COMPUTING, FMEC, 2023, : 262 - 269
  • [48] Distributed Backdoor Attacks in Federated Learning Generated by DynamicTriggers
    Wang, Jian
    Shen, Hong
    Liu, Xuehua
    Zhou, Hua
    Li, Yuli
    INFORMATION SECURITY THEORY AND PRACTICE, WISTP 2024, 2024, 14625 : 178 - 193
  • [49] Scope: On Detecting Constrained Backdoor Attacks in Federated Learning
    Huang, Siquan
    Li, Yijiang
    Yan, Xingfu
    Gao, Ying
    Chen, Chong
    Shi, Leyu
    Chen, Biao
    Ng, Wing W. Y.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 3302 - 3315
  • [50] Backdoor Attacks in Peer-to-Peer Federated Learning
    Syros, Georgios
    Yar, Gokberk
    Boboila, Simona
    Nita-Rotaru, Cristina
    Oprea, Alina
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2025, 28 (01)