Mitigating Distributed Backdoor Attack in Federated Learning Through Mode Connectivity

Cited: 0
Authors
Walter, Kane [1 ]
Mohammady, Meisam [2 ]
Nepal, Surya [3 ]
Kanhere, Salil S. [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] Iowa State Univ, Ames, IA USA
[3] CSIRO, Data61, Sydney, NSW, Australia
Keywords
Federated Learning; Backdoor Attack; Mode Connectivity;
DOI
10.1145/3634737.3637682
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Federated Learning (FL) is a privacy-preserving, collaborative machine learning technique where multiple clients train a shared model on their private datasets without sharing the data. While offering advantages, FL is susceptible to backdoor attacks, where attackers insert malicious model updates into the model aggregation process. Compromised models predict attacker-chosen targets when presented with specific attacker-defined inputs. Backdoor defences generally rely on anomaly detection techniques based on Differential Privacy (DP) or require legitimate clean test examples at the server. Anomaly detection-based defences can be defeated by stealth techniques and generally require inspection of client-submitted model updates. DP-based approaches tend to degrade the performance of the trained model due to excessive noise addition during training. Methods that require legitimate clean data on the server make strong assumptions about the task and may not be applicable in real-world settings. In this work, we view the question of backdoor attack robustness through the lens of loss function optimal points to build a defence that overcomes these limitations. We propose Mode Connectivity Based Federated Learning (MCFL), which leverages the recently discovered property of neural network loss surfaces, mode connectivity. We simulate backdoor attack scenarios using computer vision benchmark datasets, including CIFAR10, Fashion MNIST, MNIST, and Federated EMNIST. Our findings show that MCFL converges to high-quality models and effectively mitigates backdoor attacks relative to baseline defences from the literature without requiring inspection of client model updates or assuming clean data at the server.
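The mode connectivity property the abstract refers to is the observation that two independently trained minima ("modes") of a neural network's loss surface can be joined by a simple parametric curve, e.g. a quadratic Bezier curve, along which loss stays low. As a minimal sketch of that geometric idea (not the paper's MCFL algorithm; the weight values and the control point `theta` below are hypothetical, and in practice `theta` is learned by minimizing expected loss along the curve):

```python
import numpy as np

def bezier_point(w_a, w_b, theta, t):
    """Point at position t in [0, 1] on a quadratic Bezier curve that
    starts at weight vector w_a, ends at w_b, and is bent toward the
    control point theta (the parameter learned in curve finding)."""
    return (1 - t) ** 2 * w_a + 2 * t * (1 - t) * theta + t ** 2 * w_b

# Toy 2-D "weight vectors" standing in for two trained models, with a
# control point deliberately off the straight line between them.
w_a = np.array([0.0, 0.0])
w_b = np.array([4.0, 0.0])
theta = np.array([2.0, 2.0])

print(bezier_point(w_a, w_b, theta, 0.0))  # endpoint A: [0. 0.]
print(bezier_point(w_a, w_b, theta, 0.5))  # curve midpoint: [2. 1.]
print(bezier_point(w_a, w_b, theta, 1.0))  # endpoint B: [4. 0.]
```

Models sampled at intermediate `t` along such a low-loss curve differ from both endpoints, which is the intuition behind using curve-sampled models to shake off backdoored behaviour baked into any single aggregated model.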
Pages: 1287-1298
Number of pages: 12
Related Papers
50 items total
  • [1] FedMC: Federated Learning with Mode Connectivity Against Distributed Backdoor Attacks
    Wang, Weiqi
    Zhang, Chenhan
    Liu, Shushu
    Tang, Mingjian
    Liu, An
    Yu, Shui
    [J]. ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 4873 - 4878
  • [2] Distributed Swift and Stealthy Backdoor Attack on Federated Learning
    Sundar, Agnideven Palanisamy
    Li, Feng
    Zou, Xukai
    Gao, Tianchong
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON NETWORKING, ARCHITECTURE AND STORAGE (NAS), 2022, : 193 - 200
  • [3] DAGUARD: distributed backdoor attack defense scheme under federated learning
    Yu S.
    Chen Z.
    Chen Z.
    Liu X.
    [J]. Tongxin Xuebao/Journal on Communications, 2023, 44 (05): : 110 - 122
  • [4] Optimally Mitigating Backdoor Attacks in Federated Learning
    Walter, Kane
    Mohammady, Meisam
    Nepal, Surya
    Kanhere, Salil S.
    [J]. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 2949 - 2963
  • [6] Mitigating the Backdoor Attack by Federated Filters for Industrial IoT Applications
    Hou, Boyu
    Gao, Jiqiang
    Guo, Xiaojie
    Baker, Thar
    Zhang, Ying
    Wen, Yanlong
    Liu, Zheli
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (05) : 3562 - 3571
  • [7] Defense against backdoor attack in federated learning
    Lu, Shiwei
    Li, Ruihu
    Liu, Wenbin
    Chen, Xuan
    [J]. COMPUTERS & SECURITY, 2022, 121
  • [8] Shadow backdoor attack: Multi-intensity backdoor attack against federated learning
    Ren, Qixian
    Zheng, Yu
    Yang, Chao
    Li, Yue
    Ma, Jianfeng
    [J]. COMPUTERS & SECURITY, 2024, 139
  • [9] Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning
    Abad, Gorka
    Paguada, Servio
    Ersoy, Oguzhan
    Picek, Stjepan
    Ramirez-Duran, Victor Julio
    Urbieta, Aitor
    [J]. 2023 IEEE CONFERENCE ON SECURE AND TRUSTWORTHY MACHINE LEARNING, SATML, 2023, : 377 - 391
  • [10] Mitigating Poisoning Attack in Federated Learning
    Uprety, Aashma
    Rawat, Danda B.
    [J]. 2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,