Mitigating Distributed Backdoor Attack in Federated Learning Through Mode Connectivity

Cited: 0
Authors
Walter, Kane [1 ]
Mohammady, Meisam [2 ]
Nepal, Surya [3 ]
Kanhere, Salil S. [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] Iowa State Univ, Ames, IA USA
[3] CSIRO, Data61, Sydney, NSW, Australia
Keywords
Federated Learning; Backdoor Attack; Mode Connectivity;
DOI
10.1145/3634737.3637682
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Federated Learning (FL) is a privacy-preserving, collaborative machine learning technique in which multiple clients train a shared model on their private datasets without sharing the data. Despite these advantages, FL is susceptible to backdoor attacks, in which attackers insert malicious model updates into the model aggregation process. Compromised models predict attacker-chosen targets when presented with specific attacker-defined inputs. Backdoor defences generally rely on anomaly detection techniques based on Differential Privacy (DP) or require legitimate clean test examples at the server. Anomaly detection-based defences can be defeated by stealth techniques and generally require inspection of client-submitted model updates. DP-based approaches tend to degrade the performance of the trained model due to excessive noise addition during training. Methods that require legitimate clean data on the server rest on strong assumptions about the task and may not be applicable in real-world settings. In this work, we view the question of backdoor attack robustness through the lens of loss function optimal points to build a defence that overcomes these limitations. We propose Mode Connectivity Based Federated Learning (MCFL), which leverages the recently discovered property of neural network loss surfaces, mode connectivity. We simulate backdoor attack scenarios using computer vision benchmark datasets, including CIFAR10, Fashion MNIST, MNIST, and Federated EMNIST. Our findings show that MCFL converges to high-quality models and effectively mitigates backdoor attacks relative to baseline defences from the literature, without requiring inspection of client model updates or assuming clean data at the server.
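The mode connectivity property the abstract leverages says that two independently trained minima of a neural network loss surface can typically be joined by a simple parametric curve (e.g. a quadratic Bézier) along which the loss stays low. The toy sketch below illustrates that curve-finding idea on a 1-D surrogate loss with two minima; it is an illustrative assumption-laden example, not the MCFL implementation, and the names `bezier_point`, `loss`, and the endpoint weights are all hypothetical:

```python
import numpy as np

def bezier_point(w_a, w_theta, w_b, t):
    """Quadratic Bezier curve phi(t) between weight vectors w_a and w_b,
    bent through a control point w_theta (the mode-connectivity setup)."""
    return (1 - t) ** 2 * w_a + 2 * t * (1 - t) * w_theta + t ** 2 * w_b

def loss(w):
    """Toy surrogate loss with two minima at w = -1 and w = +1."""
    return float(np.mean((w ** 2 - 1.0) ** 2))

w_a = np.array([-1.0])      # endpoint model A (e.g. one trained mode)
w_b = np.array([1.0])       # endpoint model B (e.g. another trained mode)
w_theta = np.array([0.0])   # control point; in curve-finding methods it is
                            # optimized so the loss stays low along the path

# Evaluate the loss at a few points along the connecting curve.
path_losses = [loss(bezier_point(w_a, w_theta, w_b, t))
               for t in np.linspace(0.0, 1.0, 5)]
```

In real mode-connectivity training, `w_theta` is a full set of network weights optimized over the expected loss along the curve; the defensive intuition in MCFL is that points along such a curve can retain clean-task accuracy while shedding behaviour (such as backdoors) specific to either endpoint.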
Pages: 1287-1298
Number of pages: 12
Related Papers
50 records in total
  • [41] An Investigation of Recent Backdoor Attacks and Defenses in Federated Learning
    Chen, Qiuxian
    Tao, Yizheng
    [J]. 2023 EIGHTH INTERNATIONAL CONFERENCE ON FOG AND MOBILE EDGE COMPUTING, FMEC, 2023, : 262 - 269
  • [42] Towards defending adaptive backdoor attacks in Federated Learning
    Yang, Han
    Gu, Dongbing
    He, Jianhua
    [J]. ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 5078 - 5084
  • [43] One-Shot Backdoor Removal for Federated Learning
    Pan, Zijie
    Ying, Zuobin
    Wang, Yajie
    Zhang, Chuan
    Li, Chunhai
    Zhu, Liehuang
    [J]. IEEE Internet of Things Journal, 2024, 11 (23) : 37718 - 37730
  • [44] SAFELearning: Secure Aggregation in Federated Learning With Backdoor Detectability
    Zhang, Zhuosheng
    Li, Jiarui
    Yu, Shucheng
    Makaya, Christian
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 3289 - 3304
  • [45] Efficient and Secure Federated Learning Against Backdoor Attacks
    Miao, Yinbin
    Xie, Rongpeng
    Li, Xinghua
    Liu, Zhiquan
    Choo, Kim-Kwang Raymond
    Deng, Robert H.
    [J]. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (05) : 4619 - 4636
  • [46] Towards Practical Backdoor Attacks on Federated Learning Systems
    Shi C.
    Ji S.
    Pan X.
    Zhang X.
    Zhang M.
    Yang M.
    Zhou J.
    Yin J.
    Wang T.
    [J]. IEEE Transactions on Dependable and Secure Computing, 2024, 21 (06) : 1 - 16
  • [47] IBA: Towards Irreversible Backdoor Attacks in Federated Learning
    Dung Thuy Nguyen
    Tuan Nguyen
    Tuan Anh Tran
    Doan, Khoa D.
    Wong, Kok-Seng
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [48] Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios
    Mei, Haochen
    Li, Gaolei
    Wu, Jun
    Zheng, Longfei
    [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [49] BapFL: You Can Backdoor Personalized Federated Learning
    Ye, Tiandi
    Chen, Cen
    Wang, Yinggui
    Li, Xiang
    Gao, Ming
    [J]. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (07)
  • [50] Genetic Algorithm-Based Dynamic Backdoor Attack on Federated Learning-Based Network Traffic Classification
    Nazzal, Mahmoud
    Aljaafari, Nura
    Sawalmeh, Ahmed
    Khreishah, Abdallah
    Anan, Muhammad
    Algosaibi, Abdulelah
    Alnaeem, Mohammed
    Aldalbahi, Adel
    Alhumam, Abdulaziz
    Vizcarra, Conrado P.
    Alhamed, Shadan
    [J]. 2023 EIGHTH INTERNATIONAL CONFERENCE ON FOG AND MOBILE EDGE COMPUTING, FMEC, 2023, : 204 - 209