SoteriaFL: A Unified Framework for Private Federated Learning with Communication Compression

Cited by: 0
Authors
Li, Zhize [1 ]
Zhao, Haoyu [2 ]
Li, Boyue [1 ]
Chi, Yuejie [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[2] Princeton Univ, Princeton, NJ USA
Keywords
DOI
Not available
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
To enable large-scale machine learning in bandwidth-hungry environments such as wireless networks, significant progress has recently been made in designing communication-efficient federated learning algorithms with the aid of communication compression. On the other hand, privacy preservation, especially at the client level, is another important desideratum that has not yet been addressed simultaneously with advanced communication compression techniques. In this paper, we propose a unified framework that enhances the communication efficiency of private federated learning via communication compression. Exploiting both general compression operators and local differential privacy, we first examine a simple algorithm that applies compression directly to differentially private stochastic gradient descent, and identify its limitations. We then propose SoteriaFL, a unified framework for private federated learning that accommodates a general family of local gradient estimators, including popular stochastic variance-reduced gradient methods and the state-of-the-art shifted compression scheme. We provide a comprehensive characterization of its performance trade-offs in terms of privacy, utility, and communication complexity, showing that SoteriaFL achieves better communication complexity than private federated learning algorithms without communication compression, while sacrificing neither privacy nor utility.
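The baseline the abstract examines, applying compression directly to differentially private SGD, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: each client adds Gaussian noise to its local gradient (local DP), compresses the noisy gradient with an unbiased rand-k sparsifier, and the server averages the compressed messages. The function names `rand_k` and `cdp_sgd_round`, and all parameter choices, are assumptions made for illustration.

```python
import numpy as np

def rand_k(v, k, rng):
    """Unbiased rand-k sparsification: keep k random coordinates,
    scaled by d/k so the compressor is unbiased in expectation."""
    d = v.size
    out = np.zeros(d)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)
    return out

def cdp_sgd_round(x, client_grads, sigma, k, lr, rng):
    """One communication round of the naive compressed DP-SGD baseline:
    each client perturbs its gradient with Gaussian noise (local DP),
    compresses the noisy gradient, and the server averages the messages."""
    msgs = [rand_k(g + rng.normal(0.0, sigma, g.shape), k, rng)
            for g in client_grads]
    return x - lr * np.mean(msgs, axis=0)

# Toy usage: two clients, a 5-dimensional model.
rng = np.random.default_rng(0)
x = np.zeros(5)
grads = [np.ones(5), 2 * np.ones(5)]
x_new = cdp_sgd_round(x, grads, sigma=0.1, k=2, lr=0.5, rng=rng)
```

Because the DP noise is injected before compression, each coordinate that survives sparsification carries noise amplified by the d/k scaling; this interaction between noise and compression is the kind of limitation the paper identifies in the direct-compression baseline before introducing the shifted compression scheme.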
Pages: 16