Stealthy Backdoor Attack Against Federated Learning Through Frequency Domain by Backdoor Neuron Constraint and Model Camouflage

Cited: 1
Authors
Qiao, Yanqi [1 ]
Liu, Dazhuang [1 ]
Wang, Rui [1 ]
Liang, Kaitai [1 ]
Affiliations
[1] Delft Univ Technol, Fac Elect Engn Math & Comp Sci, NL-2600 AA Delft, Netherlands
Keywords
Federated learning; backdoor attacks; stealthiness; frequency domain; backdoor neuron; model camouflage; activation value
DOI
10.1109/JETCAS.2024.3450527
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Federated Learning (FL) is a beneficial decentralized learning approach for preserving the privacy of the local datasets of distributed agents. However, the distributed nature of FL and the untrustworthiness of local data introduce vulnerability to backdoor attacks. In this attack scenario, an adversary manipulates its local data with a specific trigger and trains a malicious local model to implant the backdoor. During inference, the global model then misclassifies any input carrying the trigger as the attacker-chosen prediction. Most existing backdoor attacks against FL focus on bypassing defense mechanisms without considering the server's inspection of model parameters, which leaves them susceptible to detection through dynamic clustering based on model parameter similarity. Moreover, current methods provide only limited trigger imperceptibility in the spatial domain. To address these limitations, we propose a stealthy backdoor attack against FL, called "Chironex", which uses an imperceptible trigger in frequency space to deliver attack effectiveness, stealthiness, and robustness against various FL countermeasures. We first design a frequency trigger function that generates an imperceptible frequency-domain trigger to evade human inspection. We then fully exploit the attacker's advantage to enhance attack robustness: the attacker estimates benign updates and analyzes the backdoor's impact on model parameters through a task-sensitive neuron searcher. Chironex disguises malicious updates as benign ones by reducing the influence of backdoor neurons, identified via their activation values as the main contributors to the backdoor task, and by encouraging them to update toward benign model parameters trained locally by the attacker. Extensive experiments on various image classifiers with real-world datasets provide empirical evidence that Chironex evades the most recent robust FL aggregation algorithms and achieves a distinctly higher attack success rate than existing attacks, without undermining the utility of the global model.
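The abstract names three mechanisms: a frequency-domain trigger, a task-sensitive neuron searcher driven by activation values, and model camouflage toward benign parameters. The paper's actual construction is not reproduced here; the following is a minimal, hypothetical sketch of what each step could look like. The DCT band, blend strength alpha, camouflage coefficient beta, per-channel neuron granularity, and all function names are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch, NOT the authors' implementation. Illustrates, under
# assumed design choices, the three mechanisms the abstract names:
#   1) a frequency-domain trigger (here: an additive DCT perturbation),
#   2) a task-sensitive neuron search via activation values,
#   3) model camouflage toward locally estimated benign parameters.
# band, alpha, beta, and the per-channel granularity are hypothetical.
import numpy as np
import torch
from scipy.fft import dctn, idctn


def embed_frequency_trigger(image, pattern, band=(8, 16), alpha=0.05):
    """Blend `pattern` into mid-frequency DCT coefficients of each channel.

    image: float array in [0, 1], shape (H, W, C); pattern matches the band.
    A small additive change in mid frequencies spreads over all pixels,
    which is why such triggers are hard to spot by visual inspection.
    """
    lo, hi = band
    out = image.copy()
    for c in range(image.shape[2]):
        coeffs = dctn(image[:, :, c], norm="ortho")
        coeffs[lo:hi, lo:hi] += alpha * pattern
        out[:, :, c] = idctn(coeffs, norm="ortho")
    return np.clip(out, 0.0, 1.0)


def rank_backdoor_neurons(model, layer, clean_batch, triggered_batch, k=10):
    """Rank units of `layer` by the activation gap between triggered and
    clean inputs; the largest gaps flag backdoor-task-sensitive neurons."""
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(a=o.detach()))
    with torch.no_grad():
        model(clean_batch)
        clean_act = acts["a"].mean(dim=0)
        model(triggered_batch)
        trig_act = acts["a"].mean(dim=0)
    handle.remove()
    gap = (trig_act - clean_act).abs()
    if gap.dim() > 1:                      # conv output: aggregate per channel
        gap = gap.flatten(1).mean(dim=1)
    return torch.topk(gap, k).indices


def camouflage(malicious_state, benign_state, layer_name, idx, beta=0.9):
    """Pull the backdoor-neuron rows of one layer toward the attacker's
    locally trained benign parameters, so the submitted update clusters
    with benign ones under parameter-similarity inspection."""
    w_mal, w_ben = malicious_state[layer_name], benign_state[layer_name]
    w_mal[idx] = beta * w_ben[idx] + (1.0 - beta) * w_mal[idx]
    return malicious_state
```

In this sketch, a malicious client would poison a fraction of its training batch with embed_frequency_trigger, train the backdoored model, then apply camouflage to the neurons returned by rank_backdoor_neurons before submitting its update to the server.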
Pages: 661-672 (12 pages)