Fortifying Federated Learning against Membership Inference Attacks via Client-level Input Perturbation

Cited by: 2
Authors
Yang, Yuchen [1 ]
Yuan, Haolin [1 ]
Hui, Bo [1 ]
Gong, Neil [2 ]
Fendley, Neil [1 ,3 ]
Burlina, Philippe [3 ]
Cao, Yinzhi [1 ]
Affiliations
[1] Johns Hopkins Univ, Baltimore, MD USA
[2] Duke Univ, Durham, NC USA
[3] Johns Hopkins Appl Phys Lab, Laurel, MD USA
Funding
U.S. National Science Foundation;
Keywords
RISK;
DOI
10.1109/DSN58367.2023.00037
CLC Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Membership inference (MI) attacks are more diverse in a Federated Learning (FL) setting, because the adversary may be an FL client, the server, or an external attacker. Existing defenses against MI attacks rely on perturbing either the model's output predictions or the training process. However, output perturbations are ineffective in an FL setting, because a malicious server can access the model before any output perturbation is applied, while training-time perturbations struggle to preserve utility. This paper proposes a novel defense, called CIP, that fortifies FL against MI attacks via a client-level input perturbation applied during both training and inference. The key insight is to shift each client's local data distribution via a personalized perturbation, yielding a shifted model. CIP achieves a good balance between privacy and utility: our evaluation shows that it reduces accuracy by at most 0.7% while lowering attack success to random guessing.
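The key insight above, that each client applies a fixed, personalized perturbation to its inputs at both training and inference time so its model only ever sees a shifted local distribution, can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation; the class name `PerturbedClient`, the `epsilon` scale, and the Gaussian/clip choices are all hypothetical.

```python
import numpy as np

class PerturbedClient:
    """Hypothetical sketch of client-level input perturbation.

    Each client draws one personalized perturbation and applies it to
    every input, during training AND inference, shifting its local data
    distribution. (Names and parameters here are illustrative only.)
    """

    def __init__(self, client_id, input_shape, epsilon=0.1):
        # Seeding with the client id keeps the perturbation fixed and
        # personalized across all rounds for this client.
        rng = np.random.default_rng(client_id)
        self.delta = epsilon * rng.standard_normal(input_shape)

    def perturb(self, x):
        # Applied identically at training and inference time, so the
        # local model only ever sees the client's shifted distribution.
        return np.clip(x + self.delta, 0.0, 1.0)
```

Because the same shift is used at inference, the client's own queries remain in-distribution for its model, while the shift decorrelates the model from the raw training data that an MI adversary probes.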
Pages: 288-301
Page count: 14
Related Papers
50 records in total
  • [1] Defending against Membership Inference Attacks in Federated Learning via Adversarial Example
    Xie, Yuanyuan; Chen, Bing; Zhang, Jiale; Wu, Di
    2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021), 2021: 153-160
  • [2] Efficient Membership Inference Attacks against Federated Learning via Bias Differences
    Zhang, Liwei; Li, Linghui; Li, Xiaoyong; Cai, Binsi; Gao, Yali; Dou, Ruobin; Chen, Luying
    PROCEEDINGS OF THE 26TH INTERNATIONAL SYMPOSIUM ON RESEARCH IN ATTACKS, INTRUSIONS AND DEFENSES, RAID 2023, 2023: 222-235
  • [3] LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks
    Ma, Mengyao; Zhang, Yanjun; Chamikara, M. A. P.; Zhang, Leo Yu; Chhetri, Mohan Baruwal; Bai, Guangdong
    PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023: 122-135
  • [4] Binary Federated Learning with Client-Level Differential Privacy
    Liu, Lumin; Zhang, Jun; Song, Shenghui; Letaief, Khaled B.
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023: 3849-3854
  • [5] Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning
    Hu, Hongsheng; Zhang, Xuyun; Salcic, Zoran; Sun, Lichao; Choo, Kim-Kwang Raymond; Dobbie, Gillian
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04): 3012-3029
  • [6] Enhance membership inference attacks in federated learning
    He, Xinlong; Xu, Yang; Zhang, Sicong; Xu, Weida; Yan, Jiale
    COMPUTERS & SECURITY, 2024, 136
  • [7] Federated Learning With Sparsified Model Perturbation: Improving Accuracy Under Client-Level Differential Privacy
    Hu, Rui; Guo, Yuanxiong; Gong, Yanmin
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (08): 8242-8255
  • [8] FD-Leaks: Membership Inference Attacks Against Federated Distillation Learning
    Yang, Zilu; Zhao, Yanchao; Zhang, Jiale
    WEB AND BIG DATA, PT III, APWEB-WAIM 2022, 2023, 13423: 364-378
  • [9] CMI: Client-Targeted Membership Inference in Federated Learning
    Zheng, Tianhang; Li, Baochun
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04): 4122-4132
  • [10] Flexible Clustered Federated Learning for Client-Level Data Distribution Shift
    Duan, Moming; Liu, Duo; Ji, Xinyuan; Wu, Yu; Liang, Liang; Chen, Xianzhang; Tan, Yujuan; Ren, Ao
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (11): 2661-2674