Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization

Cited by: 0
Authors
Hu, Rui [1]
Gong, Yanmin [1]
Guo, Yuanxiong [1]
Affiliation
[1] Univ Texas San Antonio, San Antonio, TX 78249 USA
Funding
U.S. National Science Foundation (NSF)
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other. However, data locality alone does not provide sufficient privacy protection, and it is desirable to facilitate FL with a rigorous differential privacy (DP) guarantee. Existing DP mechanisms would introduce random noise with magnitude proportional to the model size, which can be quite large in deep neural networks. In this paper, we propose a new FL framework with sparsification-amplified privacy. Our approach integrates random sparsification with gradient perturbation on each agent to amplify the privacy guarantee. Since sparsification increases the number of communication rounds required to reach a given target accuracy, which is unfavorable for the DP guarantee, we further introduce acceleration techniques to help reduce the privacy cost. We rigorously analyze the convergence of our approach and use Rényi DP to tightly account for the end-to-end DP guarantee. Extensive experiments on benchmark datasets validate that our approach outperforms previous differentially private FL approaches in both privacy guarantee and communication efficiency.
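The per-agent mechanism described in the abstract (random sparsification combined with gradient perturbation) can be sketched as follows. This is a minimal illustration under assumed details, not the paper's exact algorithm: the function name, the uniform choice of coordinates, and the clipping-then-noising order are assumptions; the key point is that Gaussian noise is added only to the k transmitted coordinates rather than to all d model dimensions.

```python
import numpy as np

def sparsified_dp_gradient(grad, k, clip_norm, noise_std, rng):
    """Hypothetical per-agent update: random-k sparsification plus
    Gaussian gradient perturbation, in the spirit of the abstract."""
    d = grad.size
    # Clip the gradient to bound its L2 sensitivity.
    g = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    # Uniformly pick k of the d coordinates to keep (random sparsification).
    keep = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(g)
    # Perturb only the kept coordinates, so the injected noise scales
    # with the sparsity level k instead of the full model size d.
    out[keep] = g[keep] + rng.normal(0.0, noise_std, size=k)
    return out

rng = np.random.default_rng(0)
g = rng.normal(size=1000)
priv = sparsified_dp_gradient(g, k=100, clip_norm=1.0, noise_std=0.1, rng=rng)
```

In the FL loop, each agent would send its sparse, perturbed update to the server for aggregation; the sparsification both cuts communication and, as the paper argues, amplifies the DP guarantee of the perturbation.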
Pages: 1463-1469 (7 pages)