VPPFL: A verifiable privacy-preserving federated learning scheme against poisoning attacks

Cited by: 1
Authors
Huang, Yuxian [1 ]
Yang, Geng [1 ]
Zhou, Hao [1 ]
Dai, Hua [1 ]
Yuan, Dong [2 ]
Yu, Shui [3 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Comp Sci & Technol, Nanjing 210003, Jiangsu, Peoples R China
[2] Univ Sydney, Sch Elect & Informat Engn, Sydney, NSW 2006, Australia
[3] Univ Technol Sydney, Sch Comp Sci, Sydney, NSW 2007, Australia
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Poisoning attacks; Differential privacy; Privacy-preserving; Defense strategy; SECURE;
DOI
10.1016/j.cose.2023.103562
CLC Classification
TP [Automation Technology; Computer Technology];
Discipline Code
0812 ;
Abstract
Federated Learning (FL) allows users to train a global model without sharing their original data, making the data available yet invisible. However, not all users are benign, and malicious users can corrupt the global model by uploading poisonous parameters. Compared with other machine learning settings, two factors make poisoning attacks easier to mount in FL: 1) malicious users can poison the model parameters directly, which is more efficient than data poisoning; and 2) privacy-preserving techniques, such as homomorphic encryption (HE) or differential privacy (DP), give poisonous parameters a cover, making it difficult for the server to detect outliers. To resolve this dilemma, we propose VPPFL, a verifiable privacy-preserving federated learning scheme built on DP. VPPFL defends against poisoning attacks and protects users' privacy at low computation and communication cost. Specifically, we design a verification mechanism that can verify parameters perturbed by DP noise, thereby identifying poisonous parameters. In addition, we provide a comprehensive analysis from the perspectives of security, convergence, and complexity. Extensive experiments show that our scheme matches the detection capability of prior works while requiring only 15%-30% of their computation cost and 7%-14% of their communication cost.
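The record does not disclose VPPFL's actual verification mechanism, but the workflow the abstract describes (clients clip and perturb their updates with DP noise; the server screens the noisy parameters for poisoning before aggregation) can be sketched as below. Everything here is an illustrative assumption, not the paper's algorithm: the function names `clip_and_perturb` and `filter_outliers`, the Gaussian noise scale, and the cosine-similarity-to-median outlier test are all hypothetical stand-ins.

```python
import numpy as np

def clip_and_perturb(update, clip_norm=1.0, sigma=0.1, rng=None):
    """Client side (hypothetical): clip an update to clip_norm,
    then add Gaussian DP noise scaled to the clipping bound."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def filter_outliers(updates, threshold=0.0):
    """Server side (hypothetical): keep updates whose cosine similarity
    to the coordinate-wise median update exceeds threshold; discard the
    rest as suspected poisonous parameters."""
    median = np.median(np.stack(updates), axis=0)
    kept = []
    for u in updates:
        cos = u @ median / (np.linalg.norm(u) * np.linalg.norm(median) + 1e-12)
        if cos > threshold:
            kept.append(u)
    return kept
```

Because the screening runs on the noise-perturbed vectors themselves, no user reveals a raw update; the design question such schemes face is choosing a noise scale small enough that poisoned directions remain statistically separable from benign ones.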
Pages: 13
Related papers
50 records
  • [1] Liu, Jiao; Li, Xinghua; Liu, Ximeng; Zhang, Haiyan; Miao, Yinbin; Deng, Robert H. DefendFL: A Privacy-Preserving Federated Learning Scheme Against Poisoning Attacks. IEEE Transactions on Neural Networks and Learning Systems, 2024.
  • [2] Li, Xiumin; Wen, Mi; He, Siying; Lu, Rongxing; Wang, Liangliang. A Privacy-Preserving Federated Learning Scheme Against Poisoning Attacks in Smart Grid. IEEE Internet of Things Journal, 2024, 11 (9): 16805-16816.
  • [3] Zhang, Xianglong; Fu, Anmin; Wang, Huaqun; Zhou, Chunyi; Chen, Zhenzhu. A Privacy-Preserving and Verifiable Federated Learning Scheme. ICC 2020 - 2020 IEEE International Conference on Communications (ICC), 2020.
  • [4] Xia, Feng; Cheng, Wenhao. A survey on privacy-preserving federated learning against poisoning attacks. Cluster Computing, 2024, 27 (10): 13565-13582.
  • [5] Yang, Xue; Ma, Minjie; Tang, Xiaohu. An efficient privacy-preserving and verifiable scheme for federated learning. Future Generation Computer Systems, 2024, 160: 238-250.
  • [6] Muhr, Trent; Zhang, Wensheng. Privacy-Preserving Detection of Poisoning Attacks in Federated Learning. 2022 19th Annual International Conference on Privacy, Security & Trust (PST), 2022.
  • [7] Wang, Gang; Zhou, Li; Li, Qingming; Yan, Xiaoran; Liu, Ximeng; Wu, Yuncheng. FVFL: A Flexible and Verifiable Privacy-Preserving Federated Learning Scheme. IEEE Internet of Things Journal, 2024, 11 (13): 23268-23281.
  • [8] Yazdinejad, Abbas; Dehghantanha, Ali; Karimipour, Hadis; Srivastava, Gautam; Parizi, Reza M. A Robust Privacy-Preserving Federated Learning Model Against Model Poisoning Attacks. IEEE Transactions on Information Forensics and Security, 2024, 19: 6693-6708.
  • [9] Zhang, Fan; Huang, Hui; Chen, Zhixiong; Huang, Zhenjie. Robust and privacy-preserving federated learning with distributed additive encryption against poisoning attacks. Computer Networks, 2024, 245.
  • [10] Ma, Zhuoran; Ma, Jianfeng; Miao, Yinbin; Li, Yingjiu; Deng, Robert H. ShieldFL: Mitigating Model Poisoning Attacks in Privacy-Preserving Federated Learning. IEEE Transactions on Information Forensics and Security, 2022, 17: 1639-1654.