Defending against Membership Inference Attacks in Federated learning via Adversarial Example

Cited by: 5
Authors
Xie, Yuanyuan [1 ]
Chen, Bing [1 ]
Zhang, Jiale [2 ]
Wu, Di [3 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Sch Comp Sci & Technol, Nanjing, Peoples R China
[2] Yangzhou Univ, Sch Informat Engn, Yangzhou, Jiangsu, Peoples R China
[3] Deakin Univ, Sch Informat Technol, Melbourne, Vic, Australia
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Membership inference attacks; Adversarial example
DOI
10.1109/MSN53354.2021.00036
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated learning has attracted attention in recent years due to its native privacy-preserving design. However, it remains vulnerable to various attacks, such as backdoor, poisoning, and membership inference attacks. A membership inference attack aims to determine whether a given data record was used to train the model, leading to privacy leakage for participants who train the shared model on their local data. Recent countermeasures mainly focus on protecting model parameters and have limitations in guaranteeing privacy while restraining the model's utility loss. This paper proposes Fedefend, which applies adversarial examples to defend against membership inference attacks in federated learning. The proposed approach adds well-designed noise to the attack features of the target model at each iteration, turning them into adversarial examples against the inference attack. In addition, we also consider the utility loss of the model and use an adversarial method to generate noise that constrains this loss to a certain extent, efficiently achieving a trade-off between privacy and the utility of the federated learning model. We evaluate the proposed Fedefend on two benchmark datasets, and the experimental results demonstrate that Fedefend performs well.
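The abstract describes perturbing the attack features (typically the target model's confidence vector) so they become adversarial examples for the membership-inference classifier, while bounding utility loss. The paper's exact algorithm is not reproduced in this record; the sketch below is a minimal MemGuard-style illustration under stated assumptions: the attack model is approximated by a hypothetical linear surrogate (`w`, `b`, where a positive score means "member"), and utility is preserved by refusing any perturbation that changes the predicted label.

```python
import numpy as np

def craft_defensive_noise(conf, w, b, step=0.1, max_iter=200):
    """Perturb a confidence vector `conf` so a linear surrogate
    membership-inference classifier (score = w @ x + b) flips to
    'non-member', without changing the predicted class (argmax)."""
    x = conf.copy()
    label = int(np.argmax(conf))          # predicted class to preserve
    for _ in range(max_iter):
        if w @ x + b <= 0:                # attack already predicts 'non-member'
            break
        cand = x - step * w               # step against the attack score
        cand = np.clip(cand, 1e-6, None)  # keep probabilities positive
        cand = cand / cand.sum()          # renormalize to a distribution
        if int(np.argmax(cand)) != label:
            break                         # utility constraint: stop early
        x = cand
    return x
```

For example, with `conf = [0.7, 0.2, 0.1]` and the surrogate `w = [1.0, -0.5, -0.5]`, `b = -0.3`, a few steps suffice to push the attack score below zero while class 0 stays the argmax. In the actual Fedefend setting this perturbation would be applied to the features the attacker observes at each federated-learning iteration, and the step size governs the privacy/utility trade-off.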
Pages: 153-160
Page count: 8
Related Papers (50 total)
  • [1] MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
    Jia, Jinyuan
    Salem, Ahmed
    Backes, Michael
    Zhang, Yang
    Gong, Neil Zhenqiang
    [J]. PROCEEDINGS OF THE 2019 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'19), 2019, : 259 - 274
  • [2] Defending against Adversarial Attacks in Federated Learning on Metric Learning Model
    Gu, Zhipin
    Shi, Jiangyong
    Yang, Yuexiang
    He, Liangzhong
    [J]. 2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024, : 197 - 206
  • [3] Efficient Membership Inference Attacks against Federated Learning via Bias Differences
    Zhang, Liwei
    Li, Linghui
    Li, Xiaoyong
    Cai, Binsi
    Gao, Yali
    Dou, Ruobin
    Chen, Luying
    [J]. PROCEEDINGS OF THE 26TH INTERNATIONAL SYMPOSIUM ON RESEARCH IN ATTACKS, INTRUSIONS AND DEFENSES, RAID 2023, 2023, : 222 - 235
  • [4] TEAR: Exploring Temporal Evolution of Adversarial Robustness for Membership Inference Attacks Against Federated Learning
    Liu, Gaoyang
    Tian, Zehao
    Chen, Jian
    Wang, Chen
    Liu, Jiangchuan
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 4996 - 5010
  • [5] Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning
    Hu, Hongsheng
    Zhang, Xuyun
    Salcic, Zoran
    Sun, Lichao
    Choo, Kim-Kwang Raymond
    Dobbie, Gillian
    [J]. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 3012 - 3029
  • [6] Enhance membership inference attacks in federated learning
    He, Xinlong
    Xu, Yang
    Zhang, Sicong
    Xu, Weida
    Yan, Jiale
    [J]. COMPUTERS & SECURITY, 2024, 136
  • [7] Defending Against Membership Inference Attacks on Beacon Services
    Venkatesaramani, Rajagopal
    Wan, Zhiyu
    Malin, Bradley A.
    Vorobeychik, Yevgeniy
    [J]. ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2023, 26 (03)
  • [8] Defending against membership inference attacks: RM Learning is all you need
    Zhang, Zheng
    Ma, Jianfeng
    Ma, Xindi
    Yang, Ruikang
    Wang, Xiangyu
    Zhang, Junying
    [J]. INFORMATION SCIENCES, 2024, 670
  • [9] Fortifying Federated Learning against Membership Inference Attacks via Client-level Input Perturbation
    Yang, Yuchen
    Yuan, Haolin
    Hui, Bo
    Gong, Neil
    Fendley, Neil
    Burlina, Philippe
    Cao, Yinzhi
    [J]. 2023 53RD ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS, DSN, 2023, : 288 - 301
  • [10] FD-Leaks: Membership Inference Attacks Against Federated Distillation Learning
    Yang, Zilu
    Zhao, Yanchao
    Zhang, Jiale
    [J]. WEB AND BIG DATA, PT III, APWEB-WAIM 2022, 2023, 13423 : 364 - 378