Distributed Backdoor Attacks in Federated Learning Generated by Dynamic Triggers

Times Cited: 0
Authors
Wang, Jian [1 ,3 ]
Shen, Hong [2 ]
Liu, Xuehua [3 ]
Zhou, Hua [3 ]
Li, Yuli [3 ]
Affiliations
[1] Macao Polytech Univ, Fac Appl Sci, Macau, Peoples R China
[2] Cent Queensland Univ, Sch Engn & Technol, Rockhampton, Qld, Australia
[3] Guangzhou Inst Software, Sch Software Technol, Guangzhou, Peoples R China
Keywords
Federated learning; data poisoning; security; backdoor attack
DOI
10.1007/978-3-031-60391-4_12
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The emergence of federated learning has alleviated the dual challenges of data silos and data privacy in machine learning. However, this distributed learning approach is more susceptible to backdoor attacks: malicious participants can inject backdoor triggers into their local training datasets to manipulate model predictions, for example, making the classifier recognize poisoned samples (injected with specific triggers) as a chosen target class. To effectively detect backdoor attacks and protect federated learning systems, we need to understand how such attacks are generated and how they evolve. Currently, most backdoor attacks on federated learning are centralized attacks with static triggers, which are easily detected by existing defense methods. In this work, we propose a distributed backdoor attack method that fully leverages the distributed nature of federated learning. It first generates unique and independent global dynamic triggers for the benign samples to be infected, then decomposes the global trigger into multiple sub-triggers and embeds them into the training sets of multiple participants; the data poisoning takes place during the training phase. Extensive experiments demonstrate that this attack exhibits higher persistence and stealthiness, achieving a significantly higher success rate than standard centralized backdoor attacks, and shows noticeable improvements in attack performance over the classical distributed backdoor attack (DBA) method.
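To illustrate the trigger-decomposition step described in the abstract (a global trigger split into sub-triggers that individual malicious participants embed into their local training data), the sketch below shows one minimal way such poisoning could be scripted. It is only a toy NumPy sketch under assumed conventions: the helper names split_global_trigger and poison_local_dataset, the strip-wise decomposition, the randomly re-sampled "dynamic" trigger, and all parameter values are hypothetical and do not reproduce the authors' actual trigger generator.

import numpy as np

def split_global_trigger(global_trigger, num_parts):
    # Split a (h, w) global trigger patch into vertical strips,
    # one sub-trigger per malicious participant (DBA-style decomposition).
    return np.array_split(global_trigger, num_parts, axis=1)

def poison_local_dataset(images, labels, sub_trigger, col_offset,
                         target_label, poison_fraction=0.1, rng=None):
    # Stamp `sub_trigger` onto a random fraction of one participant's images
    # at a fixed column offset (so the union of all sub-triggers recreates
    # the global trigger) and relabel the stamped samples to `target_label`.
    # images: (n, H, W) floats in [0, 1]; labels: (n,) ints.
    rng = rng if rng is not None else np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(poison_fraction * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    h, w = sub_trigger.shape
    images[idx, :h, col_offset:col_offset + w] = sub_trigger
    labels[idx] = target_label
    return images, labels, idx

if __name__ == "__main__":
    rng = np.random.default_rng(42)

    # Toy "dynamic" global trigger: re-sampled each round rather than a fixed patch.
    global_trigger = rng.uniform(0.8, 1.0, size=(4, 8))
    sub_triggers = split_global_trigger(global_trigger, num_parts=4)
    offsets = np.cumsum([0] + [s.shape[1] for s in sub_triggers[:-1]])

    # Four malicious participants, each poisoning its own toy local dataset.
    for k, (sub, off) in enumerate(zip(sub_triggers, offsets)):
        local_x = rng.uniform(0.0, 1.0, size=(100, 28, 28))
        local_y = rng.integers(0, 10, size=100)
        px, py, idx = poison_local_dataset(local_x, local_y, sub, int(off),
                                           target_label=7, rng=rng)
        print(f"participant {k}: poisoned {len(idx)} samples with a "
              f"{sub.shape[0]}x{sub.shape[1]} sub-trigger at column {int(off)}")

In a full federated round, each poisoned local dataset would then feed that participant's local training before the usual model-update aggregation; only the stamped inputs and their labels change, which is what makes each individual sub-trigger hard to spot on its own.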
Pages: 178-193
Number of Pages: 16