Dual-domain based backdoor attack against federated learning

Cited by: 1
Authors
Li, Guorui [1 ,2 ]
Chang, Runxing [1 ]
Wang, Ying [3 ]
Wang, Cong [1 ,2 ]
Affiliations
[1] Northeastern Univ, Sch Comp Sci & Engn, Shenyang 110819, Peoples R China
[2] Northeastern Univ Qinhuangdao, Hebei Key Lab Marine Percept Network & Data Proc, Qinhuangdao 066004, Peoples R China
[3] Qinhuangdao Vocat & Tech Coll, Dept Informat Engn, Qinhuangdao 066100, Peoples R China
Keywords
Backdoor attack; Federated learning; Frequency domain; Spatial domain; Trigger;
DOI
10.1016/j.neucom.2025.129424
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline classification code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The distributed training paradigm and data heterogeneity of federated learning (FL) render it susceptible to various threats, among which the backdoor attack stands out as the most destructive. By injecting malicious functionality into the global model through poisoned updates, backdoor attacks can produce attacker-desired inference results on trigger-embedded inputs while behaving normally on other data instances. However, current backdoor triggers carry conspicuous visual features that can be easily identified by humans or computers. Meanwhile, the commonly used model update clipping mechanism is so simple and straightforward that it is easily recognized by various defense methods. To address these shortcomings, we propose a dual-domain based backdoor attack (DDBA) against FL in this paper. On the one hand, DDBA generates an imperceptible dual-domain trigger for any image by superimposing a trigger pattern onto the low-frequency region of its amplitude spectrum and then applying a slight spatial distortion. On the other hand, DDBA truncates the model update dynamically using a newly designed adaptive clipping mechanism to enhance its stealthiness. Finally, we carried out extensive experiments to evaluate the attack performance and stealth performance of DDBA on four publicly available datasets. The experimental results show that DDBA achieves excellent attack performance in both single-shot and multiple-shot attack scenarios, as well as robust stealth performance under existing defense methods against backdoor attacks.
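Illustrative note: the abstract only sketches the two technical components of DDBA at a high level. The Python sketch below (not the authors' code) shows one plausible way to realize a dual-domain trigger with NumPy/SciPy: a small pattern is blended into the low-frequency region of the image's amplitude spectrum, and a slight sinusoidal spatial warp is then applied. The function name and the parameters alpha and warp_strength are assumptions made for demonstration, not the paper's actual design or settings.

import numpy as np
from scipy.ndimage import map_coordinates

def embed_dual_domain_trigger(image, trigger_patch, alpha=0.05, warp_strength=0.5):
    """Sketch of a dual-domain trigger (illustrative, NOT the authors' code).

    Assumes a single-channel float image in [0, 1]; alpha and warp_strength
    are made-up parameters chosen only for demonstration.
    """
    # Frequency-domain step: blend a small patch into the low-frequency
    # region of the amplitude spectrum (centred after fftshift).
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    h, w = image.shape
    ph, pw = trigger_patch.shape
    cy, cx = h // 2, w // 2
    region = (slice(cy - ph // 2, cy - ph // 2 + ph),
              slice(cx - pw // 2, cx - pw // 2 + pw))
    amplitude[region] = (1 - alpha) * amplitude[region] + alpha * trigger_patch

    # Back to the spatial domain using the original phase spectrum.
    poisoned = np.fft.ifft2(np.fft.ifftshift(amplitude * np.exp(1j * phase))).real

    # Spatial-domain step: slight sinusoidal coordinate warp.
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    yy_warp = yy + warp_strength * np.sin(2 * np.pi * 4 * xx / w)
    xx_warp = xx + warp_strength * np.sin(2 * np.pi * 4 * yy / h)
    poisoned = map_coordinates(poisoned, [yy_warp, xx_warp], order=1, mode="reflect")

    return np.clip(poisoned, 0.0, 1.0)

Likewise, the adaptive clipping can be pictured as the attacker rescaling its poisoned update so that its norm stays just inside the range of benign update norms it can observe or estimate; the median-based bound and margin below are illustrative assumptions, not the mechanism designed in the paper.

def adaptive_clip(update, benign_norms, margin=0.9):
    # Rescale the flattened update so its L2 norm stays below a bound
    # derived from (estimated) benign update norms; margin is illustrative.
    bound = margin * float(np.median(benign_norms))
    norm = np.linalg.norm(update)
    return update if norm <= bound else update * (bound / norm)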
Pages: 12
Related papers
50 records in total
  • [41] BADFL: Backdoor Attack Defense in Federated Learning From Local Model Perspective
    Zhang, Haiyan
    Li, Xinghua
    Xu, Mengfan
    Liu, Ximeng
    Wu, Tong
    Weng, Jian
    Deng, Robert H.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (11) : 5661 - 5674
  • [42] A Practical Clean-Label Backdoor Attack with Limited Information in Vertical Federated Learning
    Chen, Peng
    Yang, Jirui
    Lin, Junxiong
    Lu, Zhihui
    Duan, Qiang
    Chai, Hongfeng
    23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING, ICDM 2023, 2023, : 41 - 50
  • [43] Genetic Algorithm-Based Dynamic Backdoor Attack on Federated Learning-Based Network Traffic Classification
    Nazzal, Mahmoud
    Aljaafari, Nura
    Sawalmeh, Ahmed
    Khreishah, Abdallah
    Anan, Muhammad
    Algosaibi, Abdulelah
    Alnaeem, Mohammed
    Aldalbahi, Adel
    Alhumam, Abdulaziz
    Vizcarra, Conrado P.
    Alhamed, Shadan
    2023 EIGHTH INTERNATIONAL CONFERENCE ON FOG AND MOBILE EDGE COMPUTING, FMEC, 2023, : 204 - 209
  • [44] How To Backdoor Federated Learning
    Bagdasaryan, Eugene
    Veit, Andreas
    Hua, Yiqing
    Estrin, Deborah
    Shmatikov, Vitaly
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 2938 - 2947
  • [45] Federated Learning Backdoor Defense Based on Watermark Integrity
    Hou, Yinjian
    Zhao, Yancheng
    Yao, Kaiqi
    2024 10TH INTERNATIONAL CONFERENCE ON BIG DATA AND INFORMATION ANALYTICS, BIGDIA 2024, 2024, : 288 - 294
  • [46] FedMC: Federated Learning with Mode Connectivity Against Distributed Backdoor Attacks
    Wang, Weiqi
    Zhang, Chenhan
    Liu, Shushu
    Tang, Mingjian
    Liu, An
    Yu, Shui
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 4873 - 4878
  • [47] Defending against Poisoning Backdoor Attacks on Federated Meta-learning
    Chen, Chien-Lun
    Babakniya, Sara
    Paolieri, Marco
    Golubchik, Leana
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (05)
  • [48] BadRL: Sparse Targeted Backdoor Attack against Reinforcement Learning
    Cui, Jing
    Han, Yufei
    Ma, Yuzhe
    Jiao, Jianbin
    Zhang, Junge
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 10, 2024, : 11687 - 11694
  • [49] An adaptive robust defending algorithm against backdoor attacks in federated learning
    Wang, Yongkang
    Zhai, Di-Hua
    He, Yongping
    Xia, Yuanqing
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 143 : 118 - 131
  • [50] FederatedReverse: A Detection and Defense Method Against Backdoor Attacks in Federated Learning
    Zhao, Chen
    Wen, Yu
    Li, Shuailou
    Liu, Fucheng
    Meng, Dan
    PROCEEDINGS OF THE 2021 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2021, 2021, : 51 - 62