GAN-Driven Data Poisoning Attacks and Their Mitigation in Federated Learning Systems

Cited by: 4
Authors
Psychogyios, Konstantinos [1 ]
Velivassaki, Terpsichori-Helen [1 ]
Bourou, Stavroula [1 ]
Voulkidis, Artemis [1 ]
Skias, Dimitrios [2 ]
Zahariadis, Theodore [1 ,3 ]
Affiliations
[1] Synelixis Solut SA, GR-34100 Chalkida, Greece
[2] Netco Intrasoft SA, GR-19002 Paiania, Greece
[3] Natl & Kapodistrian Univ Athens, Gen Dept, GR-15772 Athens, Greece
Funding
EU Horizon 2020;
Keywords
machine learning; federated learning; generative adversarial networks; data poisoning; label flipping; VULNERABILITY ASSESSMENT; TAXONOMY;
DOI
10.3390/electronics12081805
Chinese Library Classification
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
Federated learning (FL) is an emerging machine learning technique where machine learning models are trained in a decentralized manner. The main advantage of this approach is the data privacy it provides, because the data are not processed on a centralized device. Moreover, the local client models are aggregated on a server, resulting in a global model that has accumulated knowledge from all the different clients. This approach, however, is vulnerable to attacks, because clients can be malicious or malicious actors may interfere within the network. In the first case, these attacks may take the form of data or model poisoning, where the data or the model parameters, respectively, are altered. In this paper, we investigate data poisoning attacks and, more specifically, the label-flipping case within a federated learning system. For an image classification task, we introduce two variants of data poisoning attacks, namely model degradation and targeted label attacks. These attacks are based on synthetic images generated by a generative adversarial network (GAN). This network is trained jointly by the malicious clients using a concatenated malicious dataset. Due to dataset sample limitations, the architecture and learning procedure of the GAN are adjusted accordingly. Through the experiments, we demonstrate that these attacks are effective in achieving their goals while fooling common federated defenses (stealth). We also propose a mechanism to mitigate these attacks based on clean-label training on the server side. In more detail, we see that the model degradation attack causes an accuracy degradation of up to 25%, while common defenses can only alleviate this by approximately 5%. Similarly, the targeted label attack results in a misclassification rate of 56%, compared to 2.5% when no attack takes place. Moreover, our proposed defense mechanism is able to mitigate these attacks.
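The label-flipping attack and server-side aggregation described in the abstract can be sketched minimally as follows. This is an illustrative sketch only, not the authors' implementation: the helper names (`flip_labels`, `fedavg`) are hypothetical, and the GAN-based generation of synthetic poisoned images is omitted.

```python
import numpy as np

def flip_labels(labels, source, target):
    """Targeted label flip: a malicious client relabels every
    `source`-class example as `target` before local training."""
    poisoned = labels.copy()
    poisoned[labels == source] = target
    return poisoned

def fedavg(client_params, client_sizes):
    """FedAvg aggregation: average client parameters weighted by
    each client's local dataset size. A poisoned client's update
    enters the global model in proportion to its data share."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Toy example: one malicious client flips class 1 -> 0.
labels = np.array([0, 1, 0, 1, 1])
poisoned = flip_labels(labels, source=1, target=0)
print(poisoned)  # all labels now 0

# Three equally sized clients; the third submits a skewed update.
agg = fedavg([np.array([1.0]), np.array([1.0]), np.array([4.0])],
             client_sizes=[10, 10, 10])
print(agg)  # weighted mean: [2.0]
```

In the paper's setting, the poisoned samples are GAN-generated images mislabeled in this fashion, which is what lets the attack stay stealthy against defenses that inspect only the submitted model updates.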
Pages: 17
Related Papers
50 in total
  • [21] A Federated Learning Framework against Data Poisoning Attacks on the Basis of the Genetic Algorithm
    Zhai, Ran
    Chen, Xuebin
    Pei, Langtao
    Ma, Zheng
    ELECTRONICS, 2023, 12 (03)
  • [22] Mitigating Data Poisoning Attacks On a Federated Learning-Edge Computing Network
    Doku, Ronald
    Rawat, Danda B.
    2021 IEEE 18TH ANNUAL CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE (CCNC), 2021,
  • [23] Decentralized Defense: Leveraging Blockchain against Poisoning Attacks in Federated Learning Systems
    Thennakoon, Rashmi
    Wanigasundara, Arosha
    Weerasinghe, Sanjaya
    Seneviratne, Chatura
    Siriwardhana, Yushan
    Liyanage, Madhusanka
    2024 IEEE 21ST CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2024, : 950 - 955
  • [24] PoisonGAN: Generative Poisoning Attacks Against Federated Learning in Edge Computing Systems
    Zhang, Jiale
    Chen, Bing
    Cheng, Xiang
    Huynh Thi Thanh Binh
    Yu, Shui
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (05) : 3310 - 3322
  • [25] RobustFL: Robust Federated Learning Against Poisoning Attacks in Industrial IoT Systems
    Zhang, Jiale
    Ge, Chunpeng
    Hu, Feng
    Chen, Bing
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (09) : 6388 - 6397
  • [26] GAN-Driven Anomaly Detection for Active Learning in Medical Imaging Segmentation
    Woodland, M.
    Patel, A.
    Anderson, B.
    Lin, E.
    Koay, E.
    Odisio, B.
    Brock, K.
    MEDICAL PHYSICS, 2021, 48 (06)
  • [27] A Federated Weighted Learning Algorithm Against Poisoning Attacks
    Yafei Ning
    Zirui Zhang
    Hu Li
    Yuhan Xia
    Ming Li
    International Journal of Computational Intelligence Systems, 18 (1)
  • [28] Defending Against Poisoning Attacks in Federated Learning with Blockchain
    Dong N.
    Wang Z.
    Sun J.
    Kampffmeyer M.
    Knottenbelt W.
    Xing E.
    IEEE Transactions on Artificial Intelligence, 2024, 5 (07): 1 - 13
  • [29] Suppressing Poisoning Attacks on Federated Learning for Medical Imaging
    Alkhunaizi, Naif
    Kamzolov, Dmitry
    Takac, Martin
    Nandakumar, Karthik
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT VIII, 2022, 13438 : 673 - 683
  • [30] Defending Against Targeted Poisoning Attacks in Federated Learning
    Erbil, Pinar
    Gursoy, M. Emre
    2022 IEEE 4TH INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS, AND APPLICATIONS, TPS-ISA, 2022, : 198 - 207