GAN-Driven Data Poisoning Attacks and Their Mitigation in Federated Learning Systems

Cited by: 4
Authors
Psychogyios, Konstantinos [1]
Velivassaki, Terpsichori-Helen [1]
Bourou, Stavroula [1]
Voulkidis, Artemis [1]
Skias, Dimitrios [2]
Zahariadis, Theodore [1,3]
Affiliations
[1] Synelixis Solut SA, GR-34100 Chalkida, Greece
[2] Netco Intrasoft SA, GR-19002 Paiania, Greece
[3] Natl & Kapodistrian Univ Athens, Gen Dept, GR-15772 Athens, Greece
Funding
EU Horizon 2020
Keywords
machine learning; federated learning; generative adversarial networks; data poisoning; label flipping; vulnerability assessment; taxonomy
DOI
10.3390/electronics12081805
CLC Number
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Federated learning (FL) is an emerging machine learning technique where models are trained in a decentralized manner. The main advantage of this approach is the data privacy it provides, because the data are not processed on a centralized device. Moreover, the local client models are aggregated on a server, resulting in a global model that has accumulated knowledge from all the different clients. This approach, however, is vulnerable to attacks, because clients may be malicious or external actors may interfere with the network. In the former case, such attacks typically take the form of data or model poisoning, where the training data or the model parameters, respectively, are altered. In this paper, we investigate data poisoning attacks and, more specifically, the label-flipping case within a federated learning system. For an image classification task, we introduce two variants of data poisoning attacks, namely model degradation and targeted label attacks. These attacks are based on synthetic images generated by a generative adversarial network (GAN). This network is trained jointly by the malicious clients using a concatenated malicious dataset. Due to dataset sample limitations, the architecture and learning procedure of the GAN are adjusted accordingly. Through the experiments, we demonstrate that these types of attacks are effective in achieving their task while also evading common federated defenses (stealth). We also propose a mechanism to mitigate these attacks based on clean-label training on the server side. Specifically, the model degradation attack causes an accuracy degradation of up to 25%, while common defenses can only alleviate this by ~5%. Similarly, the targeted label attack results in a misclassification rate of 56%, compared to 2.5% when no attack takes place. Moreover, our proposed defense mechanism is able to mitigate these attacks.
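The two primitives the abstract relies on, targeted label flipping on a client's dataset and server-side aggregation of client models, can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the `flip_labels` helper, its parameters, and the use of plain FedAvg weighted averaging are assumptions for demonstration purposes.

```python
import numpy as np

def flip_labels(labels, source, target, fraction=1.0, rng=None):
    """Targeted label flipping: relabel a fraction of the `source`-class
    samples as `target`. This is the basic data-poisoning primitive behind
    a targeted label attack (illustrative helper, not the paper's code)."""
    rng = rng or np.random.default_rng(0)
    poisoned = labels.copy()
    idx = np.flatnonzero(labels == source)          # samples of the victim class
    chosen = rng.choice(idx, size=int(len(idx) * fraction), replace=False)
    poisoned[chosen] = target                        # mislabel them
    return poisoned

def fedavg(client_params, client_sizes):
    """FedAvg-style aggregation: average client parameter vectors weighted
    by local dataset size, producing the global model update."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))
```

A malicious client would apply `flip_labels` to its local data before training; because the resulting model update is averaged into the global model by `fedavg`, the poison propagates to all clients unless the server filters or validates updates (e.g., via clean-label training, as proposed in the paper).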
Pages: 17
Related Papers (50 total)
  • [1] Detection and Mitigation of Targeted Data Poisoning Attacks in Federated Learning
    Erbil, Pinar
    Gursoy, M. Emre
    2022 IEEE INTL CONF ON DEPENDABLE, AUTONOMIC AND SECURE COMPUTING, INTL CONF ON PERVASIVE INTELLIGENCE AND COMPUTING, INTL CONF ON CLOUD AND BIG DATA COMPUTING, INTL CONF ON CYBER SCIENCE AND TECHNOLOGY CONGRESS (DASC/PICOM/CBDCOM/CYBERSCITECH), 2022, : 271 - 278
  • [2] Data Poisoning Attacks Against Federated Learning Systems
    Tolpegin, Vale
    Truex, Stacey
    Gursoy, Mehmet Emre
    Liu, Ling
    COMPUTER SECURITY - ESORICS 2020, PT I, 2020, 12308 : 480 - 501
  • [3] Data Poisoning Attacks on Federated Machine Learning
    Sun, Gan
    Cong, Yang
    Dong, Jiahua
    Wang, Qiang
    Lyu, Lingjuan
    Liu, Ji
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (13) : 11365 - 11375
  • [4] Data Poisoning Attacks and Mitigation Strategies on Federated Support Vector Machines
    Mouri, I. J.
    Ridowan, M.
    Adnan, M. A.
    SN COMPUTER SCIENCE, 2024, 5 (2)
  • [5] Attacks against Federated Learning Defense Systems and their Mitigation
    Lewis, Cody
    Varadharajan, Vijay
    Noman, Nasimul
    JOURNAL OF MACHINE LEARNING RESEARCH, 2023, 24
  • [6] VagueGAN: A GAN-Based Data Poisoning Attack Against Federated Learning Systems
    Sun, Wei
    Gao, Bo
    Xiong, Ke
    Lu, Yang
    Wang, Yuwei
    2023 20TH ANNUAL IEEE INTERNATIONAL CONFERENCE ON SENSING, COMMUNICATION, AND NETWORKING, SECON, 2023,
  • [7] Perception Poisoning Attacks in Federated Learning
    Chow, Ka-Ho
    Liu, Ling
    2021 THIRD IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2021), 2021, : 146 - 155
  • [8] Poisoning Attacks in Federated Learning: A Survey
    Xia, Geming
    Chen, Jian
    Yu, Chaodong
    Ma, Jun
    IEEE ACCESS, 2023, 11 : 10708 - 10722
  • [9] Mitigating Poisoning Attacks in Federated Learning
    Ganjoo, Romit
    Ganjoo, Mehak
    Patil, Madhura
    INNOVATIVE DATA COMMUNICATION TECHNOLOGIES AND APPLICATION, ICIDCA 2021, 2022, 96 : 687 - 699
  • [10] Defending Against Data Poisoning Attacks: From Distributed Learning to Federated Learning
    Tian, Yuchen
    Zhang, Weizhe
    Simpson, Andrew
    Liu, Yang
    Jiang, Zoe Lin
    COMPUTER JOURNAL, 2023, 66 (03): : 711 - 726