Poisoning Attack in Federated Learning using Generative Adversarial Nets

Cited: 128
Authors
Zhang, Jiale [1 ]
Chen, Junjun [2 ]
Wu, Di [3 ,4 ]
Chen, Bing [1 ]
Yu, Shui [3 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing 100029, Peoples R China
[3] Univ Technol Sydney, Sch Software, Sydney, NSW 2007, Australia
[4] Univ Technol Sydney, Ctr Artificial Intelligence, Sydney, NSW 2007, Australia
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; poisoning attack; generative adversarial nets; security; privacy;
DOI
10.1109/TrustCom/BigDataSE.2019.00057
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning is a novel distributed learning framework in which a deep learning model is trained collaboratively among thousands of participants. Only model parameters are exchanged between the server and the participants, which prevents the server from directly accessing the private training data. However, we observe that the federated learning architecture is vulnerable to an active attack by insider participants, called a poisoning attack, in which the attacker poses as a benign participant and uploads poisoned updates to the server, thereby degrading the performance of the global model. In this work, we study and evaluate a poisoning attack on federated learning systems based on generative adversarial nets (GAN). Specifically, an attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, to which the attacker has no direct access. The attacker then fully controls these generated samples to craft poisoning updates, and compromises the global model by uploading scaled poisoning updates to the server. Our evaluation shows that the attacker can successfully generate samples resembling those of other benign participants using the GAN, and that the global model achieves more than 80% accuracy on both the poisoning task and the main task.
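The "scaled poisoning update" step in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's implementation: the server is assumed to aggregate participant updates by plain averaging (FedAvg-style), benign updates are modeled as near-zero, and the attacker scales its poisoned delta by the number of participants so that averaging cancels the division and the global model lands near the attacker's target.

```python
import numpy as np

def fedavg_delta(deltas):
    """Server-side aggregation: plain average of participant update deltas."""
    return np.mean(deltas, axis=0)

n = 10                                   # number of participants (assumed)
global_w = np.zeros(4)                   # current global model weights
target_w = np.array([1.0, -2.0, 0.5, 3.0])  # attacker's poisoned target model

# Benign participants send small updates (modeled here as zeros).
benign_deltas = [np.zeros(4) for _ in range(n - 1)]

# Attacker scales its poisoned delta by n so the average reproduces it intact.
attacker_delta = n * (target_w - global_w)

new_global = global_w + fedavg_delta(benign_deltas + [attacker_delta])
print(new_global)  # close to target_w: the global model is replaced
```

Because the server divides the summed deltas by `n`, multiplying the attacker's delta by `n` makes the aggregated update equal the attacker's intended shift, which is why scaling is essential for a single insider to dominate the round.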
Pages: 374 - 380
Page count: 7
Related Papers
50 records in total
  • [41] FLAIR: Defense against Model Poisoning Attack in Federated Learning
    Sharma, Atul
    Chen, Wei
    Zhao, Joshua
    Qiu, Qiang
    Bagchi, Saurabh
    Chaterji, Somali
    PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023, : 553 - +
  • [42] LoMar: A Local Defense Against Poisoning Attack on Federated Learning
    Li, Xingyu
    Qu, Zhe
    Zhao, Shangqing
    Tang, Bo
    Lu, Zhuo
    Liu, Yao
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (01) : 437 - 450
  • [43] Attack-Resilient Connectivity Game for UAV Networks using Generative Adversarial Learning
    Yang, Bo
    Liu, Min
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 1743 - 1751
  • [44] Generative Transferable Adversarial Attack
    Li, Yifeng
    Zhang, Ya
    Zhang, Rui
    Wang, Yanfeng
    ICVIP 2019: PROCEEDINGS OF 2019 3RD INTERNATIONAL CONFERENCE ON VIDEO AND IMAGE PROCESSING, 2019, : 84 - 89
  • [45] Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks
    Sun, Yuwei
    Chong, Ng S. T.
    Ochiai, Hideya
    2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2021, : 2749 - 2754
  • [46] Federated Traffic Synthesizing and Classification Using Generative Adversarial Networks
    Xu, Chenxin
    Xia, Rong
    Xiao, Yong
    Li, Yingyu
    Shi, Guangming
    Chen, Kwang-cheng
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021,
  • [47] Multi-Modal Imitation Learning from Unstructured Demonstrations using Generative Adversarial Nets
    Hausman, Karol
    Chebotar, Yevgen
    Schaal, Stefan
    Sukhatme, Gaurav
    Lim, Joseph J.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [48] DAG-GAN: CAUSAL STRUCTURE LEARNING WITH GENERATIVE ADVERSARIAL NETS
    Gao, Yinghua
    Shen, Li
    Xia, Shu-Tao
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 3320 - 3324
  • [49] InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
    Chen, Xi
    Duan, Yan
    Houthooft, Rein
    Schulman, John
    Sutskever, Ilya
    Abbeel, Pieter
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [50] ABNORMAL EVENT DETECTION IN VIDEOS USING GENERATIVE ADVERSARIAL NETS
    Ravanbakhsh, Mahdyar
    Nabi, Moin
    Sangineto, Enver
    Marcenaro, Lucio
    Regazzoni, Carlo
    Sebe, Nicu
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 1577 - 1581