Defending against Poisoning Backdoor Attacks on Federated Meta-learning

Cited: 5
Authors
Chen, Chien-Lun [1 ]
Babakniya, Sara [1 ]
Paolieri, Marco [1 ]
Golubchik, Leana [1 ]
Affiliations
[1] Univ Southern Calif, 941 Bloom Walk, Los Angeles, CA 90089 USA
Funding
U.S. National Science Foundation;
Keywords
Federated learning; meta-learning; poisoning attacks; backdoor attacks; matching networks; attention mechanism; security and privacy; PRIVACY;
DOI
10.1145/3523062
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning allows multiple users to collaboratively train a shared classification model while preserving data privacy. This approach, where model updates are aggregated by a central server, was shown to be vulnerable to poisoning backdoor attacks: a malicious user can alter the shared model to arbitrarily classify specific inputs from a given class. In this article, we analyze the effects of backdoor attacks on federated meta-learning, where users train a model that can be adapted to different sets of output classes using only a few examples. While the ability to adapt could, in principle, make federated learning frameworks more robust to backdoor attacks (when new training examples are benign), we find that even one-shot attacks can be very successful and persist after additional training. To address these vulnerabilities, we propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the similarity of its features with a support set of labeled examples. By removing the decision logic from the model shared with the federation, the success and persistence of backdoor attacks are greatly reduced.
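The defense described in the abstract predicts the class of an input from the similarity of its features to a labeled support set, using the attention mechanism of matching networks. A minimal sketch of that prediction step, assuming cosine-similarity attention and NumPy (the function name and data shapes are illustrative, not taken from the paper):

```python
import numpy as np

def matching_net_predict(query_emb, support_embs, support_labels, n_classes):
    """Classify `query_emb` by attention over a labeled support set:
    softmax over cosine similarities, then a weighted vote of the
    support labels (matching-networks style)."""
    # Cosine similarity between the query and each support embedding.
    q = query_emb / np.linalg.norm(query_emb)
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    sims = s @ q
    # Attention weights via a numerically stable softmax.
    attn = np.exp(sims - sims.max())
    attn /= attn.sum()
    # Attention-weighted vote over one-hot support labels.
    onehot = np.eye(n_classes)[support_labels]
    class_probs = attn @ onehot
    return int(np.argmax(class_probs))
```

Because the support set is held locally and the decision logic is not part of the shared model, a backdoored feature extractor alone cannot force an arbitrary label, which is the intuition behind the reduced attack success reported in the abstract.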
Pages: 25
Related Papers
50 records
  • [31] Adaptive Backdoor Attacks Against Dataset Distillation for Federated Learning
    Chai, Ze
    Gao, Zhipeng
    Lin, Yijing
    Zhao, Chen
    Yu, Xinlei
    Xie, Zhiqiang
    ICC 2024 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2024, : 4614 - 4619
  • [32] CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
    Xie, Chulin
    Chen, Minghao
    Chen, Pin-Yu
    Li, Bo
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [33] FLSAD: Defending Backdoor Attacks in Federated Learning via Self-Attention Distillation
    Chen, Lucheng
    Liu, Xiaoshuang
    Wang, Ailing
    Zhai, Weiwei
    Cheng, Xiang
    SYMMETRY-BASEL, 2024, 16 (11):
  • [34] Defending Against Backdoor Attacks by Quarantine Training
    Yu, Chengxu
    Zhang, Yulai
    IEEE ACCESS, 2024, 12 : 10681 - 10689
  • [35] Unlearning Backdoor Attacks in Federated Learning
    Wu, Chen
    Zhu, Sencun
    Mitra, Prasenjit
    Wang, Wei
    2024 IEEE CONFERENCE ON COMMUNICATIONS AND NETWORK SECURITY, CNS 2024, 2024,
  • [36] Defending against Adversarial Attacks in Federated Learning on Metric Learning Model
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    He, Liangzhong
    2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024, : 197 - 206
  • [37] FedMC: Federated Learning with Mode Connectivity Against Distributed Backdoor Attacks
    Wang, Weiqi
    Zhang, Chenhan
    Liu, Shushu
    Tang, Mingjian
    Liu, An
    Yu, Shui
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 4873 - 4878
  • [38] Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning
    Lyu, Xiaoting
    Han, Yufei
    Wang, Wei
    Liu, Jingkai
    Wang, Bin
    Liu, Jiqiang
    Zhang, Xiangliang
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 7, 2023, : 9020 - 9028
  • [39] FederatedReverse: A Detection and Defense Method Against Backdoor Attacks in Federated Learning
    Zhao, Chen
    Wen, Yu
    Li, Shuailou
    Liu, Fucheng
    Meng, Dan
    PROCEEDINGS OF THE 2021 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2021, 2021, : 51 - 62
  • [40] A Federated Weighted Learning Algorithm Against Poisoning Attacks
    Ning, Yafei
    Zhang, Zirui
    Li, Hu
    Xia, Yuhan
    Li, Ming
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 18 (1)