Making federated learning robust to adversarial attacks by learning data and model association

Cited: 15
Authors
Qayyum, Adnan [1 ]
Janjua, Muhammad Umar [1 ]
Qadir, Junaid [2 ]
Affiliations
[1] Information Technology University of the Punjab, Lahore, Pakistan
[2] Qatar University, Doha, Qatar
Keywords
Federated learning; Robust FL; Adversarial ML; Label flipping attack; Robust ML
DOI
10.1016/j.cose.2022.102827
CLC number
TP [Automation and computer technology]
Subject classification code
0812
Abstract
One of the key challenges in federated learning (FL) is the detection of malicious parameter updates. In a typical FL setup, the presence of malicious client(s) can derail the overall training of the shared global model by corrupting the server's aggregation process. In this paper, we present a hybrid learning-based method for detecting poisoned/malicious parameter updates from malicious clients. To demonstrate the effectiveness of the proposed method, we provide empirical evidence by evaluating it against a well-known label flipping attack on three different image classification tasks. The results suggest that our method can effectively detect and discard poisoned parameter updates without causing a significant drop in the overall performance of the FL paradigm. Our proposed method achieves an average malicious-parameter-update detection accuracy of 97.57%, 92.35%, and 89.42% for image classification on MNIST, CIFAR, and APTOS diabetic retinopathy (DR) detection, respectively. Our method provides a performance gain of approximately 2% over a recent, similar state-of-the-art method on MNIST classification, and comparable performance on federated extended MNIST (FEMNIST). (C) 2022 The Authors. Published by Elsevier Ltd.
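The abstract does not specify the authors' hybrid detector, so the sketch below only illustrates the general idea it builds on: screening client updates for anomalies (such as those produced by label-flipped training data) before the server aggregates them. The distance-from-median heuristic, function name, and threshold here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def filter_updates(updates, threshold=2.0):
    """Flag client updates that lie far from the coordinate-wise median.

    updates: list of 1-D numpy arrays (flattened model deltas).
    Returns (kept_updates, flagged_indices). Heuristic sketch only;
    not the detector proposed in the paper.
    """
    stacked = np.stack(updates)                  # (n_clients, n_params)
    median = np.median(stacked, axis=0)          # robust central estimate
    dists = np.linalg.norm(stacked - median, axis=1)
    cutoff = threshold * np.median(dists)        # scale-free cutoff
    flagged = [i for i, d in enumerate(dists) if d > cutoff]
    kept = [u for i, u in enumerate(updates) if i not in flagged]
    return kept, flagged

# Toy example: four benign clients plus one poisoned update whose
# parameters deviate strongly (as a label-flipping attack might cause).
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.01, 10) for _ in range(4)]
poisoned = rng.normal(1.0, 0.01, 10)
kept, flagged = filter_updates(benign + [poisoned])
```

A server using such a filter would aggregate only `kept` (e.g., by averaging), discarding the flagged updates instead of letting them influence the global model.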
Pages: 9