Mitigating Distributed Backdoor Attack in Federated Learning Through Mode Connectivity

Cited by: 0
Authors
Walter, Kane [1 ]
Mohammady, Meisam [2 ]
Nepal, Surya [3 ]
Kanhere, Salil S. [1 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] Iowa State Univ, Ames, IA USA
[3] CSIRO, Data61, Sydney, NSW, Australia
Keywords
Federated Learning; Backdoor Attack; Mode Connectivity;
DOI
10.1145/3634737.3637682
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline Classification Code
0812
Abstract
Federated Learning (FL) is a privacy-preserving, collaborative machine learning technique in which multiple clients train a shared model on their private datasets without sharing the data. Despite these advantages, FL is susceptible to backdoor attacks, where attackers insert malicious model updates into the model aggregation process. Compromised models predict attacker-chosen targets when presented with specific attacker-defined inputs. Backdoor defences generally rely on anomaly detection, on Differential Privacy (DP), or on legitimate clean test examples at the server. Anomaly detection-based defences can be defeated by stealth techniques and generally require inspection of client-submitted model updates. DP-based approaches tend to degrade the performance of the trained model because of the excessive noise added during training. Methods that require legitimate clean data at the server make strong assumptions about the task and may not be applicable in real-world settings. In this work, we view backdoor attack robustness through the lens of loss-function optima to build a defence that overcomes these limitations. We propose Mode Connectivity Based Federated Learning (MCFL), which leverages a recently discovered property of neural network loss surfaces, mode connectivity. We simulate backdoor attack scenarios on computer vision benchmark datasets, including CIFAR10, Fashion MNIST, MNIST, and Federated EMNIST. Our findings show that MCFL converges to high-quality models and mitigates backdoor attacks more effectively than baseline defences from the literature, without requiring inspection of client model updates or assuming clean data at the server.
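The record above contains no code. Purely as an illustration of the mode connectivity idea the abstract refers to (distinct optima of a loss surface connected by paths along which the loss stays low, or is separated by a barrier), the NumPy sketch below evaluates a toy loss along a linear interpolation path between two weight vectors. The loss function, weights, and linear path are all hypothetical choices for this sketch; the paper's actual defence builds on learned low-loss curves between models, which this fragment does not implement.

```python
import numpy as np

def path_losses(w_a, w_b, loss_fn, num_points=11):
    """Evaluate loss_fn at evenly spaced points on the straight line from w_a to w_b."""
    ts = np.linspace(0.0, 1.0, num_points)
    return [loss_fn((1.0 - t) * w_a + t * w_b) for t in ts]

# Toy loss with symmetric minima at w = +1 and w = -1 (illustrative only).
loss = lambda w: float(np.mean((w**2 - 1.0) ** 2))

w_a = np.array([1.0, 1.0])    # one "mode" (loss 0)
w_b = np.array([-1.0, -1.0])  # another "mode" (loss 0)

losses = path_losses(w_a, w_b, loss)
# Both endpoints sit at minima, but the linear path crosses w = 0,
# where the loss rises to 1.0: a loss barrier between the two modes.
```

In this toy example the straight line between the two modes crosses a high-loss region; mode connectivity research shows that for neural networks a simple curved path (e.g. a quadratic Bezier curve) can often connect such modes while keeping the loss near its minimum throughout.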
Pages: 1287-1298
Page count: 12