Defending against Poisoning Backdoor Attacks on Federated Meta-learning

Cited: 5
Authors
Chen, Chien-Lun [1 ]
Babakniya, Sara [1 ]
Paolieri, Marco [1 ]
Golubchik, Leana [1 ]
Affiliations
[1] Univ Southern Calif, 941 Bloom Walk, Los Angeles, CA 90089 USA
Funding
U.S. National Science Foundation;
Keywords
Federated learning; meta-learning; poisoning attacks; backdoor attacks; matching networks; attention mechanism; security and privacy; PRIVACY;
DOI
10.1145/3523062
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning allows multiple users to collaboratively train a shared classification model while preserving data privacy. This approach, where model updates are aggregated by a central server, was shown to be vulnerable to poisoning backdoor attacks: a malicious user can alter the shared model to arbitrarily classify specific inputs from a given class. In this article, we analyze the effects of backdoor attacks on federated meta-learning, where users train a model that can be adapted to different sets of output classes using only a few examples. While the ability to adapt could, in principle, make federated learning frameworks more robust to backdoor attacks (when new training examples are benign), we find that even one-shot attacks can be very successful and persist after additional training. To address these vulnerabilities, we propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the similarity of its features with a support set of labeled examples. By removing the decision logic from the model shared with the federation, the success and persistence of backdoor attacks are greatly reduced.
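The abstract's defense predicts an input's class from the similarity between its features and a support set of labeled examples, rather than from a classification head shared with the federation. The sketch below illustrates that matching-network-style decision rule in miniature: cosine similarity between a query embedding and each support embedding, softmax attention over the support set, and attention mass accumulated per label. All names are hypothetical and the embeddings are assumed to be precomputed; this is not the paper's implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def matching_predict(query, support):
    """Predict a label for `query` by attending over `support`.

    `support` is a list of (feature_vector, label) pairs; the
    decision logic lives entirely in this local support set,
    not in any shared model parameters.
    """
    sims = [cosine(query, feats) for feats, _ in support]
    # Softmax attention weights over the support examples
    # (subtract the max for numerical stability).
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Accumulate attention mass per label and pick the heaviest.
    scores = {}
    for w, (_, label) in zip(weights, support):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)
```

For example, a query embedded near the class-"a" support example is assigned label "a", because most of the attention mass lands on that example; swapping in a different (benign) support set changes the decision without touching any shared parameters.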
Pages: 25
Related Papers (50 total)
  • [21] DeFL: Defending against Model Poisoning Attacks in Federated Learning via Critical Learning Periods Awareness
    Yan, Gang
    Wang, Hao
    Yuan, Xu
    Li, Jian
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023, : 10711 - 10719
  • [22] MATFL: Defending Against Synergetic Attacks in Federated Learning
    Yang, Wen
    Peng, Luyao
    Tang, Xiangyun
    Weng, Yu
    2023 IEEE International Conferences on Internet of Things (iThings), IEEE Green Computing and Communications (GreenCom), IEEE Cyber, Physical and Social Computing (CPSCom), IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics, 2024, : 313 - 319
  • [23] Defending Against Byzantine Attacks in Quantum Federated Learning
    Xia, Qi
    Tao, Zeyi
    Li, Qun
    2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021), 2021, : 145 - 152
  • [24] Revisiting Personalized Federated Learning: Robustness Against Backdoor Attacks
    Qin, Zeyu
    Yao, Liuyi
    Chen, Daoyuan
    Li, Yaliang
    Ding, Bolin
    Cheng, Minhao
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 4743 - 4755
  • [25] GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning
    Gan, Xiaoyun
    Gan, Shanyu
    Su, Taizhi
    Liu, Peng
    2024 5TH INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKS AND INTERNET OF THINGS, CNIOT 2024, 2024, : 606 - 612
  • [27] FLARE: Defending Federated Learning against Model Poisoning Attacks via Latent Space Representations
    Wang, Ning
    Xiao, Yang
    Chen, Yimin
    Hu, Yang
    Lou, Wenjing
    Hou, Y. Thomas
    ASIA CCS'22: PROCEEDINGS OF THE 2022 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2022, : 946 - 958
  • [28] FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
    Zhang, Zaixi
    Cao, Xiaoyu
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 2545 - 2555
  • [30] Practical and General Backdoor Attacks Against Vertical Federated Learning
    Xuan, Yuexin
    Chen, Xiaojun
    Zhao, Zhendong
    Tang, Bisheng
    Dong, Ye
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT II, 2023, 14170 : 402 - 417