Defending against Poisoning Backdoor Attacks on Federated Meta-learning

Cited by: 5
Authors
Chen, Chien-Lun [1 ]
Babakniya, Sara [1 ]
Paolieri, Marco [1 ]
Golubchik, Leana [1 ]
Affiliations
[1] Univ Southern Calif, 941 Bloom Walk, Los Angeles, CA 90089 USA
Funding
U.S. National Science Foundation;
Keywords
Federated learning; meta-learning; poisoning attacks; backdoor attacks; matching networks; attention mechanism; security and privacy; PRIVACY;
DOI
10.1145/3523062
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning allows multiple users to collaboratively train a shared classification model while preserving data privacy. This approach, where model updates are aggregated by a central server, was shown to be vulnerable to poisoning backdoor attacks: a malicious user can alter the shared model to arbitrarily classify specific inputs from a given class. In this article, we analyze the effects of backdoor attacks on federated meta-learning, where users train a model that can be adapted to different sets of output classes using only a few examples. While the ability to adapt could, in principle, make federated learning frameworks more robust to backdoor attacks (when new training examples are benign), we find that even one-shot attacks can be very successful and persist after additional training. To address these vulnerabilities, we propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the similarity of its features with a support set of labeled examples. By removing the decision logic from the model shared with the federation, the success and persistence of backdoor attacks are greatly reduced.
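The defense the abstract describes replaces the shared model's decision logic with matching-network-style inference: an input's class is predicted by comparing its features against a support set of labeled examples. A minimal sketch of this prediction rule, assuming cosine similarity and softmax attention (following the general matching-networks formulation, not this paper's exact implementation; the function name and arguments are illustrative):

```python
import numpy as np

def matching_predict(query_emb, support_embs, support_labels, num_classes):
    """Predict a class by attention over a labeled support set.

    query_emb:      (d,) embedding of the input to classify
    support_embs:   (k, d) embeddings of the support examples
    support_labels: (k,) integer class labels of the support examples
    """
    # Cosine similarity between the query and each support embedding.
    q = query_emb / np.linalg.norm(query_emb)
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    sims = s @ q                                   # shape (k,)

    # Softmax attention over the support examples (shifted for stability).
    attn = np.exp(sims - sims.max())
    attn /= attn.sum()

    # Attention-weighted vote: total attention mass assigned to each class.
    one_hot = np.eye(num_classes)[support_labels]  # shape (k, num_classes)
    class_scores = attn @ one_hot                  # shape (num_classes,)
    return int(np.argmax(class_scores))

# Illustrative usage: two support examples near class 0, one near class 1.
support_embs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
support_labels = np.array([0, 0, 1])
print(matching_predict(np.array([0.95, 0.05]), support_embs, support_labels, 2))
```

Because the label decision comes from the (benign, locally held) support set rather than from a classification head trained across the federation, a poisoned shared feature extractor has less direct leverage over which class an input is assigned to.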
Pages: 25
Related Papers
50 records in total
  • [41] Data Poisoning Attacks Against Federated Learning Systems
    Tolpegin, Vale
    Truex, Stacey
    Gursoy, Mehmet Emre
    Liu, Ling
    COMPUTER SECURITY - ESORICS 2020, PT I, 2020, 12308 : 480 - 501
  • [42] Defending against Backdoor Attacks in Natural Language Generation
    Sun, Xiaofei
    Li, Xiaoya
    Meng, Yuxian
    Ao, Xiang
    Lyu, Lingjuan
    Li, Jiwei
    Zhang, Tianwei
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 4, 2023 : 5257 - 5265
  • [43] Backdoor Federated Learning by Poisoning Key Parameters
    Song, Xuan
    Li, Huibin
    Hu, Kailang
    Zai, Guangjun
    ELECTRONICS, 2025, 14 (01)
  • [44] Optimally Mitigating Backdoor Attacks in Federated Learning
    Walter, Kane
    Mohammady, Meisam
    Nepal, Surya
    Kanhere, Salil S.
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 2949 - 2963
  • [45] ANODYNE: Mitigating backdoor attacks in federated learning
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 259
  • [46] BadVFL: Backdoor Attacks in Vertical Federated Learning
    Naseri, Mohammad
    Han, Yufei
    De Cristofaro, Emiliano
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 2013 - 2028
  • [47] Coordinated Backdoor Attacks against Federated Learning with Model-Dependent Triggers
    Gong, Xueluan
    Chen, Yanjiao
    Huang, Huayang
    Liao, Yuqing
    Wang, Shuai
    Wang, Qian
    IEEE NETWORK, 2022, 36 (01) : 84 - 90
  • [48] HashVFL: Defending Against Data Reconstruction Attacks in Vertical Federated Learning
    Qiu, Pengyu
    Zhang, Xuhong
    Ji, Shouling
    Fu, Chong
    Yang, Xing
    Wang, Ting
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 3435 - 3450
  • [49] FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning
    Jia, Jinyuan
    Yuan, Zhuowen
    Sahabandu, Dinuka
    Niu, Luyao
    Rajabi, Arezoo
    Ramasubramanian, Bhaskar
    Li, Bo
    Poovendran, Radha
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [50] Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning
    Yang, Jie
    Zheng, Jun
    Wang, Haochen
    Li, Jiaxing
    Sun, Haipeng
    Han, Weifeng
    Jiang, Nan
    Tan, Yu-An
    SENSORS, 2023, 23 (03)