Defending Against Data and Model Backdoor Attacks in Federated Learning

Cited by: 1
Authors
Wang, Hao [1 ,2 ,3 ]
Mu, Xuejiao [1 ,3 ]
Wang, Dong [4 ]
Xu, Qiang [5 ]
Li, Kaiju [6 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Minist Culture & Tourism, Key Lab Tourism Multisource Data Percept & Decis, Chongqing 400065, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Key Lab Cyberspace Big Data Intelligent Secur, Minist Educ, Chongqing 400065, Peoples R China
[3] Chongqing Univ Posts & Telecommun, Coll Comp Sci & Technol, Chongqing 400065, Peoples R China
[4] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou 310018, Peoples R China
[5] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[6] Guizhou Univ Finance & Econ, Sch Informat, Guiyang 550025, Guizhou, Peoples R China
Source
IEEE INTERNET OF THINGS JOURNAL | 2024 / Volume 11 / Issue 24
Keywords
Data models; Training; Servers; Computational modeling; Filtering; Low-pass filters; Backdoor attack; Differential privacy; federated learning (FL); homomorphic encryption; spectrum filtering;
DOI
10.1109/JIOT.2024.3415628
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning (FL) enables collaborative model training without transferring local data, which greatly improves training efficiency. However, FL is susceptible to data and model backdoor attacks. To address data backdoor attacks, in this article we propose a defense method named TSF. TSF transforms data from the time domain to the frequency domain and applies a low-pass filter to mitigate the impact of the high-frequency signals introduced by backdoor samples. Additionally, we apply homomorphic encryption to local updates to prevent the server from inferring users' data. We also introduce a defense method against model backdoor attacks named ciphertext field similarity detection differential privacy (CFSD-DP). CFSD-DP screens malicious updates using cosine similarity detection in the ciphertext domain and perturbs the global model with a differential privacy mechanism to mitigate the impact of model backdoor attacks. It can effectively detect malicious updates while safeguarding the privacy of the global model. Experimental results show that the proposed TSF and CFSD-DP achieve a 73.8% reduction in backdoor accuracy with only a 3% impact on main-task accuracy compared with state-of-the-art schemes. Code is available at https://github.com/whwh456/TSF.
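As a rough illustration of the frequency-domain idea behind TSF, the sketch below low-pass filters a single grayscale image; the use of a 2-D FFT, a circular mask, and the cutoff radius are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def low_pass_filter(image: np.ndarray, cutoff: float = 0.25) -> np.ndarray:
    """Suppress high-frequency components of a 2-D grayscale image.

    Backdoor triggers often introduce localized, high-frequency patterns;
    attenuating frequencies beyond a cutoff radius weakens such triggers.
    """
    # Transform to the frequency domain and center the zero frequency.
    spectrum = np.fft.fftshift(np.fft.fft2(image))

    # Build a circular low-pass mask around the spectrum center.
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    mask = radius <= cutoff * min(h, w) / 2

    # Zero out high-frequency components and invert the transform.
    filtered = spectrum * mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))
```

The screening step of CFSD-DP can be sketched in a similar spirit; note that the paper performs the similarity test over homomorphically encrypted updates, whereas this plaintext sketch omits encryption, and the median reference direction, similarity threshold, and Gaussian noise scale are assumptions made for illustration:

```python
import numpy as np

def screen_and_aggregate(updates: list[np.ndarray],
                         threshold: float = 0.5,
                         noise_scale: float = 0.01) -> np.ndarray:
    """Filter suspicious client updates by cosine similarity, then
    perturb the aggregate with Gaussian noise (DP-style).

    Each element of `updates` is assumed to be a flattened 1-D
    parameter-update vector from one client.
    """
    stacked = np.stack(updates)
    reference = np.median(stacked, axis=0)  # robust reference direction

    def cos_sim(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # Keep only updates sufficiently aligned with the reference.
    kept = [u for u in updates if cos_sim(u, reference) >= threshold]
    aggregate = np.mean(kept, axis=0) if kept else reference

    # Differential-privacy-style perturbation of the global update.
    return aggregate + np.random.normal(0.0, noise_scale, aggregate.shape)
```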
Pages: 39276-39294
Number of pages: 19
Related Papers
50 records in total
  • [31] FederatedReverse: A Detection and Defense Method Against Backdoor Attacks in Federated Learning
    Zhao, Chen
    Wen, Yu
    Li, Shuailou
    Liu, Fucheng
    Meng, Dan
    PROCEEDINGS OF THE 2021 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2021, 2021, : 51 - 62
  • [32] Defending against Backdoor Attacks in Natural Language Generation
    Sun, Xiaofei
    Li, Xiaoya
    Meng, Yuxian
    Ao, Xiang
    Lyu, Lingjuan
    Li, Jiwei
    Zhang, Tianwei
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 4, 2023, : 5257 - 5265
  • [33] Optimally Mitigating Backdoor Attacks in Federated Learning
    Walter, Kane
    Mohammady, Meisam
    Nepal, Surya
    Kanhere, Salil S.
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 2949 - 2963
  • [34] ANODYNE: Mitigating backdoor attacks in federated learning
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 259
  • [35] Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
    Wan, Yichen
    Qu, Youyang
    Ni, Wei
    Xiang, Yong
    Gao, Longxiang
    Hossain, Ekram
    IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2024, 26 (03): : 1861 - 1897
  • [36] BadVFL: Backdoor Attacks in Vertical Federated Learning
    Naseri, Mohammad
    Han, Yufei
    De Cristofaro, Emiliano
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 2013 - 2028
  • [37] DeFL: Defending against Model Poisoning Attacks in Federated Learning via Critical Learning Periods Awareness
    Yan, Gang
    Wang, Hao
    Yuan, Xu
    Li, Jian
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023, : 10711 - 10719
  • [38] FedEqual: Defending Model Poisoning Attacks in Heterogeneous Federated Learning
    Chen, Ling-Yuan
    Chiu, Te-Chuan
    Pang, Ai-Chun
    Cheng, Li-Chen
2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021
  • [39] FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning
    Jia, Jinyuan
    Yuan, Zhuowen
    Sahabandu, Dinuka
    Niu, Luyao
    Rajabi, Arezoo
    Ramasubramanian, Bhaskar
    Li, Bo
    Poovendran, Radha
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [40] Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning
    Yang, Jie
    Zheng, Jun
    Wang, Haochen
    Li, Jiaxing
    Sun, Haipeng
    Han, Weifeng
    Jiang, Nan
    Tan, Yu-An
    SENSORS, 2023, 23 (03)