Privacy-Preserving Federated Deep Learning With Irregular Users

Cited by: 84
Authors
Xu, Guowen [1 ,2 ]
Li, Hongwei [1 ,2 ]
Zhang, Yun [1 ]
Xu, Shengmin [3 ]
Ning, Jianting [3 ,4 ]
Deng, Robert H. [3 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
[2] Cyberspace Secur Res Ctr, Peng Cheng Lab, Shenzhen 518000, Peoples R China
[3] Singapore Management Univ, Sch Informat Syst, Singapore 178902, Singapore
[4] Fujian Normal Univ, Coll Math & Informat, Fuzhou 350007, Peoples R China
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Training; Servers; Deep learning; Privacy; Cryptography; Neural networks; Privacy protection; federated learning; cloud computing; ACCESS-CONTROL; EFFICIENT; SECURITY; AWARE;
DOI
10.1109/TDSC.2020.3005909
CLC number
TP3 [computing technology, computer technology];
Subject classification code
0812;
Abstract
Federated deep learning has been widely used in various fields. To protect data privacy, many privacy-preserving approaches have been designed and implemented in various scenarios. However, existing works rarely consider a fundamental issue: the data shared by certain users (called irregular users) may be of low quality. In a federated training process, data shared by many irregular users may impair training accuracy or, worse, render the final model useless. In this article, we propose PPFDL, a Privacy-Preserving Federated Deep Learning framework with irregular users. Specifically, we design a novel solution to reduce the negative impact of irregular users on training accuracy, which guarantees that the training results are mainly computed from the contributions of high-quality data. Meanwhile, we exploit Yao's garbled circuits and additively homomorphic cryptosystems to ensure the confidentiality of all user-related information. Moreover, PPFDL is robust to users dropping out during execution: each user can go offline at any subprocess of training, as long as the remaining online users can still complete the training task. Extensive experiments demonstrate the superior performance of PPFDL in terms of training accuracy, computation, and communication overhead.
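The two ingredients the abstract names — down-weighting irregular users' contributions and aggregating under an additively homomorphic cryptosystem — can be combined in a minimal sketch. This is an illustrative toy, not the paper's protocol: it uses a textbook Paillier cryptosystem with deliberately insecure toy parameters, and the integer reliability weights are invented stand-ins for whatever quality scores PPFDL actually computes.

```python
# Toy sketch: the server aggregates encrypted integer-quantized user updates
# (additive homomorphism) while down-weighting an "irregular" user. Textbook
# Paillier with insecure toy parameters; the reliability weights are invented
# for illustration and are NOT how PPFDL actually scores users.
import math
import random

p, q = 293, 433          # toy primes; real deployments need >=2048-bit moduli
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1                # standard Paillier simplification
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    """Paillier encryption: c = g^m * r^n mod n^2, with random r coprime to n."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n, where L(u) = (u-1)//n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Each user sends Enc(update). The server scales ciphertexts by public integer
# reliability weights (Enc(m)^w = Enc(w*m)) and multiplies them together,
# obtaining the weighted sum without ever seeing an individual update.
updates = [5, 7, 11]     # integer-quantized updates from three users
weights = [9, 9, 1]      # third (irregular) user is heavily down-weighted
agg = 1
for m, w in zip(updates, weights):
    agg = agg * pow(enc(m), w, n2) % n2

print(dec(agg))          # 9*5 + 9*7 + 1*11 = 119
```

The decrypted aggregate equals the plaintext weighted sum, so the irregular user's noisy update barely moves the result, which is the intuition behind computing training results "mainly from the contribution of high-quality data" while keeping every individual update confidential.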
Pages: 1364-1381 (18 pages)