IBA: Towards Irreversible Backdoor Attacks in Federated Learning

Cited by: 0
Authors
Dung Thuy Nguyen [1,2,5]
Tuan Nguyen [2,3,4]
Tuan Anh Tran
Khoa D. Doan [2,3]
Kok-Seng Wong [2,3]
Affiliations
[1] Vanderbilt Univ, Dept Comp Sci, Nashville, TN 37212 USA
[2] VinUniv, VinUni Illinois Smart Hlth Ctr, Hanoi, Vietnam
[3] VinUniv, Coll Engn & Comp Sci, Hanoi, Vietnam
[4] VinAI Res, Hanoi, Vietnam
[5] VinUniv, Hanoi, Vietnam
Keywords
TAXONOMY;
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Federated learning (FL) is a distributed learning approach that enables machine learning models to be trained on decentralized data without compromising end devices' personal, potentially sensitive data. However, the distributed nature of FL and the lack of visibility into client data introduce new security vulnerabilities, including backdoor attacks. In this scenario, an adversary implants backdoor functionality into the global model during training, which can later be activated to cause the desired misbehavior for any input carrying a specific adversarial pattern. Despite their success in distorting model behavior, prior backdoor attacks in FL often rest on impractical assumptions and offer limited imperceptibility and durability. Specifically, the adversary needs to control a sufficiently large fraction of clients or know the data distribution of the other, honest clients. In many cases, the inserted trigger is visually apparent, and the backdoor effect is quickly diluted once the adversary is removed from the training process. To address these limitations, we propose a novel backdoor attack framework in FL, the Irreversible Backdoor Attack (IBA), which jointly learns an optimal, visually stealthy trigger and then gradually implants the backdoor into the global model. This approach allows the adversary to execute a backdoor attack that evades both human and machine inspection. Additionally, we enhance the efficiency and durability of the attack by selectively poisoning the model parameters that are least likely to be updated by the main task's learning process and by constraining the poisoned model update to the vicinity of the global model. Finally, we evaluate the proposed framework on several benchmark datasets, including MNIST, CIFAR-10, and Tiny ImageNet: IBA achieves high success rates while bypassing existing backdoor defenses and producing a more durable backdoor effect than other backdoor attacks. Overall, IBA offers a more effective, stealthy, and durable approach to backdoor attacks in FL.
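The two durability mechanisms described in the abstract can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes accumulated gradient magnitude on the main task as the criterion for "least likely to be updated" parameters, and an L2 ball around the global model as the proximity constraint. All function and parameter names are illustrative.

```python
import numpy as np

def least_updated_mask(benign_grad_magnitude, fraction=0.1):
    """Mask the given fraction of parameters with the smallest accumulated
    gradient magnitude on the main task; these positions are least likely
    to be overwritten by honest clients' updates (assumed criterion)."""
    k = max(1, int(fraction * benign_grad_magnitude.size))
    idx = np.argsort(benign_grad_magnitude)[:k]  # smallest-magnitude indices
    mask = np.zeros(benign_grad_magnitude.shape, dtype=bool)
    mask[idx] = True
    return mask

def project_update(poisoned_params, global_params, radius):
    """Keep the poisoned model within an L2 ball of the given radius around
    the current global model, so its update remains close to benign ones."""
    delta = poisoned_params - global_params
    norm = np.linalg.norm(delta)
    if norm > radius:
        delta = delta * (radius / norm)  # scale back onto the ball's surface
    return global_params + delta
```

In a full attack, the mask would restrict which coordinates of the poisoned update are modified, and the projection would be applied before the update is sent to the server.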
Pages: 13