IBA: Towards Irreversible Backdoor Attacks in Federated Learning

Cited by: 0
Authors
Dung Thuy Nguyen [1,2,5]
Tuan Nguyen [2,3,4]
Tuan Anh Tran
Khoa D. Doan [2,3]
Kok-Seng Wong [2,3]
Affiliations
[1] Vanderbilt Univ, Dept Comp Sci, Nashville, TN 37212 USA
[2] VinUniv, VinUni Illinois Smart Hlth Ctr, Hanoi, Vietnam
[3] VinUniv, Coll Engn & Comp Sci, Hanoi, Vietnam
[4] VinAI Res, Hanoi, Vietnam
[5] VinUniv, Hanoi, Vietnam
Keywords
TAXONOMY
DOI
Not available
CLC number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Federated learning (FL) is a distributed learning approach that enables machine learning models to be trained on decentralized data without compromising end devices' personal, potentially sensitive data. However, FL's distributed nature and unvetted client data inherently introduce new security vulnerabilities, including backdoor attacks. In this scenario, an adversary implants backdoor functionality into the global model during training; the backdoor can later be activated to cause the desired misbehavior for any input carrying a specific adversarial pattern. Despite remarkable success in triggering and distorting model behavior, prior backdoor attacks in FL often rely on impractical assumptions and offer limited imperceptibility and durability. Specifically, the adversary needs to control a sufficiently large fraction of clients or to know the data distribution of the other, honest clients. In many cases, the inserted trigger is visually apparent, and the backdoor effect is quickly diluted once the adversary is removed from the training process. To address these limitations, we propose a novel backdoor attack framework in FL, the Irreversible Backdoor Attack (IBA), which jointly learns an optimal, visually stealthy trigger and then gradually implants the backdoor into the global model. This approach allows the adversary to execute a backdoor attack that evades both human and machine inspection. Additionally, we enhance the efficiency and durability of the proposed attack by selectively poisoning the model parameters that are least likely to be updated by the main task's learning process and by constraining the poisoned model update to the vicinity of the global model. Finally, we evaluate the proposed framework on several benchmark datasets, including MNIST, CIFAR-10, and Tiny ImageNet, achieving high success rates while bypassing existing backdoor defenses and producing a more durable backdoor effect than other backdoor attacks.
Overall, IBA offers a more effective, stealthy, and durable approach to backdoor attacks in FL.
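The two durability mechanisms the abstract describes (poisoning parameters that the main task rarely updates, and keeping the poisoned update close to the global model) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the accumulated-update-magnitude heuristic, and the L2-ball projection are assumptions chosen for clarity.

```python
import numpy as np

def select_rarely_updated(update_history, fraction=0.1):
    """Return indices of the parameters with the smallest accumulated
    update magnitude across past rounds -- a proxy for coordinates the
    main task rarely touches (a hypothetical heuristic).

    update_history: array of shape (num_rounds, num_params).
    """
    importance = np.abs(update_history).sum(axis=0)
    k = max(1, int(fraction * importance.size))
    return np.argsort(importance)[:k]

def project_to_ball(poisoned, global_params, radius):
    """Constrain the poisoned model to lie within an L2 ball of the
    given radius around the current global model, so the malicious
    update stays in the vicinity of the global model."""
    delta = poisoned - global_params
    norm = np.linalg.norm(delta)
    if norm > radius:
        delta = delta * (radius / norm)
    return global_params + delta
```

In this sketch, an attacker would write backdoor weights only into the indices returned by `select_rarely_updated`, then pass the resulting model through `project_to_ball` before submitting the update, which keeps the poisoned model hard to distinguish from benign updates by distance-based defenses.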
Pages: 13
Related papers
50 results
  • [21] MITDBA: Mitigating Dynamic Backdoor Attacks in Federated Learning for IoT Applications
    Wang, Yongkang
    Zhai, Di-Hua
    Han, Dongyu
    Guan, Yuyin
    Xia, Yuanqing
IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (06) : 10115 - 10132
  • [22] Defending against Poisoning Backdoor Attacks on Federated Meta-learning
    Chen, Chien-Lun
    Babakniya, Sara
    Paolieri, Marco
    Golubchik, Leana
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (05)
  • [23] FedMC: Federated Learning with Mode Connectivity Against Distributed Backdoor Attacks
    Wang, Weiqi
    Zhang, Chenhan
    Liu, Shushu
    Tang, Mingjian
    Liu, An
    Yu, Shui
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 4873 - 4878
  • [24] An adaptive robust defending algorithm against backdoor attacks in federated learning
    Wang, Yongkang
    Zhai, Di-Hua
    He, Yongping
    Xia, Yuanqing
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2023, 143 : 118 - 131
  • [25] SCFL: Mitigating backdoor attacks in federated learning based on SVD and clustering
    Wang, Yongkang
    Zhai, Di-Hua
    Xia, Yuanqing
    COMPUTERS & SECURITY, 2023, 133
  • [26] A3FL: Adversarially Adaptive Backdoor Attacks to Federated Learning
    Zhang, Hangfan
    Jia, Jinyuan
    Chen, Jinghui
    Lin, Lu
    Wu, Dinghao
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36, NEURIPS 2023, 2023
  • [27] Never Too Late: Tracing and Mitigating Backdoor Attacks in Federated Learning
    Zeng, Hui
    Zhou, Tongqing
    Wu, Xinyi
    Cai, Zhiping
    2022 41ST INTERNATIONAL SYMPOSIUM ON RELIABLE DISTRIBUTED SYSTEMS (SRDS 2022), 2022, : 69 - 81
  • [28] FederatedReverse: A Detection and Defense Method Against Backdoor Attacks in Federated Learning
    Zhao, Chen
    Wen, Yu
    Li, Shuailou
    Liu, Fucheng
    Meng, Dan
    PROCEEDINGS OF THE 2021 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2021, 2021, : 51 - 62
  • [29] Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective
    Qin, Zhen
    Chen, Feiyi
    Zhi, Chen
    Yan, Xueqiang
    Deng, Shuiguang
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 13, 2024, : 14677 - 14685
  • [30] Coordinated Backdoor Attacks against Federated Learning with Model-Dependent Triggers
    Gong, Xueluan
    Chen, Yanjiao
    Huang, Huayang
    Liao, Yuqing
    Wang, Shuai
    Wang, Qian
    IEEE NETWORK, 2022, 36 (01): : 84 - 90