Poisoning Attack in Federated Learning using Generative Adversarial Nets

Cited by: 128
Authors
Zhang, Jiale [1]
Chen, Junjun [2]
Wu, Di [3,4]
Chen, Bing [1]
Yu, Shui [3]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing 100029, Peoples R China
[3] Univ Technol Sydney, Sch Software, Sydney, NSW 2007, Australia
[4] Univ Technol Sydney, Ctr Artificial Intelligence, Sydney, NSW 2007, Australia
Funding
National Natural Science Foundation of China;
关键词
Federated learning; poisoning attack; generative adversarial nets; security; privacy;
DOI
10.1109/TrustCom/BigDataSE.2019.00057
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Federated learning is a novel distributed learning framework in which a deep learning model is trained collaboratively by thousands of participants. Only model parameters are exchanged between the server and the participants, which prevents the server from directly accessing the private training data. However, we notice that the federated learning architecture is vulnerable to an active attack from insider participants, called a poisoning attack, in which the attacker poses as a benign participant and uploads poisoned updates to the server, easily degrading the performance of the global model. In this work, we study and evaluate a poisoning attack on federated learning systems based on generative adversarial nets (GANs). The attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, which do not belong to the attacker. The attacker then has full control over these generated samples and uses them to craft poisoned updates, compromising the global model by uploading the scaled poisoned updates to the server. In our evaluation, we show that the attacker in our construction can successfully generate samples of other benign participants using the GAN, and that the global model achieves more than 80% accuracy on both the poisoning task and the main task.
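The attack pipeline described in the abstract (use the received global model as the GAN discriminator, mislabel the generated victim-class samples, then scale the malicious update before upload) can be made concrete with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: PyTorch, a 10-class task over 1x28x28 inputs, and the names Generator, attacker_round, victim_class, poison_label, and all hyperparameters are hypothetical choices for exposition.

```python
# Minimal sketch of the GAN-based poisoning round; all names and
# hyperparameters below are illustrative assumptions, not the paper's.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT = 64  # assumed latent dimension for the generator

class Generator(nn.Module):
    """Maps latent noise to fake 28x28 single-channel samples."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

def attacker_round(global_model, victim_class=3, poison_label=5,
                   scale=10.0, steps=100, batch=64):
    """One round of the malicious participant:
    1. use the received global model as the GAN discriminator and train a
       generator to mimic the victim class held by other participants;
    2. mislabel the generated samples and train a local model copy on them;
    3. scale the resulting parameter delta before uploading it.
    """
    global_model.requires_grad_(False)  # frozen "discriminator"

    # Step 1: train the generator so the global model classifies its
    # outputs as the victim class, i.e. they look like victim data.
    gen = Generator()
    g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    for _ in range(steps):
        z = torch.randn(batch, LATENT)
        logits = global_model(gen(z))
        g_loss = F.cross_entropy(logits, torch.full((batch,), victim_class))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # Step 2: craft poisoned data by assigning a wrong label to the
    # generated victim-like samples and fine-tune a local copy on them.
    local = copy.deepcopy(global_model).requires_grad_(True)
    l_opt = torch.optim.SGD(local.parameters(), lr=0.01)
    for _ in range(steps):
        with torch.no_grad():
            fakes = gen(torch.randn(batch, LATENT))
        p_loss = F.cross_entropy(local(fakes),
                                 torch.full((batch,), poison_label))
        l_opt.zero_grad(); p_loss.backward(); l_opt.step()

    # Step 3: scale the update so it is not washed out when the server
    # averages it with the updates of many benign participants.
    return {name: scale * (local.state_dict()[name] - param)
            for name, param in global_model.state_dict().items()}

# Hypothetical usage with a toy global classifier:
clf = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
update = attacker_round(clf)  # poisoned, scaled delta to upload
```

The scale factor models the "scaled poisoning updates" mentioned in the abstract: because the server averages updates over many participants, an unscaled malicious update would be largely diluted by the benign ones.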
Pages: 374-380
Page count: 7
Related Papers (50 entries in total)
  • [31] Zhou, Xu; Liu, Xiaofeng; Lan, Gongjin; Wu, Jian. Federated conditional generative adversarial nets imputation method for air quality missing data. KNOWLEDGE-BASED SYSTEMS, 2021, 228.
  • [32] Ozmen, Emirhan; Cogun, Fuat; Altiparmak, Fatih. Classification of Imbalanced Dataset using Generative Adversarial Nets. 2020 28TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2020.
  • [33] Ding, Ming; Tang, Jie; Zhang, Jie. Semi-supervised Learning on Graphs with Generative Adversarial Nets. CIKM'18: PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, 2018: 913-922.
  • [34] Luo, Junyu; Xu, Yong; Tang, Chenwei; Lv, Jiancheng. Learning Inverse Mapping by AutoEncoder Based Generative Adversarial Nets. NEURAL INFORMATION PROCESSING (ICONIP 2017), PT II, 2017, 10635: 207-216.
  • [35] Lin, Ancheng; Li, Jun; Zhang, Lujuan; Ma, Zhenyuan; Luo, Weiqi. Multiple-Task Learning and Knowledge Transfer Using Generative Adversarial Capsule Nets. AI 2018: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, 11320: 669-680.
  • [36] Qu, Fuming; Liu, Jinhai; Hong, Xiaowei; Zhang, Yu. Data Imputation of Wind Turbine Using Generative Adversarial Nets with Deep Learning Models. NEURAL INFORMATION PROCESSING (ICONIP 2018), PT I, 2018, 11301: 152-161.
  • [37] Aydin, Ayberk; Surer, Elif. Using Generative Adversarial Nets on Atari Games for Feature Extraction in Deep Reinforcement Learning. 2020 28TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2020.
  • [38] Liu, Tao; Li, Mingjun; Zheng, Haibin; Ming, Zhaoyan; Chen, Jinyin. Evil vs evil: using adversarial examples to against backdoor attack in federated learning. MULTIMEDIA SYSTEMS, 2023, 29 (02): 553-568.
  • [40] Guo, Jingjing; Li, Haiyang; Huang, Feiran; Liu, Zhiquan; Peng, Yanguo; Li, Xinghua; Ma, Jianfeng; Menon, Varun G.; Igorevich, Konstantin Kostromitin. ADFL: A Poisoning Attack Defense Framework for Horizontal Federated Learning. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (10): 6526-6536.