Secure Federated Learning with Model Compression

Cited by: 1
Authors
Ding, Yahao [1 ]
Shikh-Bahaei, Mohammad [1 ]
Huang, Chongwen [2 ]
Yuan, Weijie [3 ]
Affiliations
[1] Kings Coll London, London, England
[2] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
[3] Southern Univ Sci & Technol, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning (FL); deep leakage from gradients (DLG); resource block (RB) allocation;
DOI
10.1109/ICCWORKSHOPS57953.2023.10283697
CLC number
TP [automation technology, computer technology];
Subject classification code
0812;
Abstract
Although federated learning (FL) has become very popular recently, it is vulnerable to gradient leakage attacks: recent studies have shown that attackers can reconstruct clients' private data from shared models or gradients. Many existing works add privacy protection mechanisms, such as differential privacy (DP) and homomorphic encryption, to prevent user privacy leakage. However, these defenses may increase computation and communication costs or degrade FL performance, and they do not consider the impact of wireless network resources on the FL training process. Herein, we propose a defense method, weight compression, to prevent gradient leakage attacks in FL over wireless networks. The gradient compression matrix is determined by the user's location and channel conditions. Moreover, we add Gaussian noise to the compressed gradients to strengthen the defense. The joint design of learning, wireless resource allocation, and the weight compression matrix is formulated as an optimization problem whose objective is to minimize the FL loss function. To solve it, we first analyze the convergence rate of FL and quantify the effect of the weight matrix on FL convergence. We then find the optimal resource block (RB) allocation by exhaustive search or ant colony optimization (ACO), and use the CVX toolbox to obtain the optimal weight matrix that minimizes the objective. Our simulation results show that the optimized RB allocation accelerates the convergence of FL.
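The client-side defense described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it only mimics the two steps named above: masking the local gradient with a binary compression matrix and adding Gaussian noise to the surviving entries. The function name `compress_and_perturb`, the `keep_ratio` and `noise_std` parameters, and the random mask are hypothetical stand-ins; in the paper the compression matrix is determined by the user's location and channel conditions, and the RB allocation and weight matrix are obtained via exhaustive search/ACO and CVX, which are not reproduced here.

```python
# Minimal sketch (assumed, not the paper's code) of the weight-compression defense:
# compress the local gradient with a binary mask and perturb it with Gaussian noise
# before sharing, to hinder deep-leakage-from-gradients (DLG) reconstruction.
import numpy as np


def compress_and_perturb(gradient, keep_ratio=0.5, noise_std=0.01, rng=None):
    """Apply a binary compression mask and Gaussian noise to a gradient array."""
    rng = np.random.default_rng() if rng is None else rng
    # Hypothetical stand-in for the channel-dependent compression matrix:
    # keep roughly `keep_ratio` of the entries and zero out the rest.
    mask = rng.random(gradient.shape) < keep_ratio
    compressed = gradient * mask
    # Gaussian perturbation on the surviving entries strengthens the defense.
    noisy = compressed + rng.normal(0.0, noise_std, size=gradient.shape) * mask
    return noisy, mask


if __name__ == "__main__":
    # Toy usage: a 10-dimensional gradient vector.
    g = np.random.default_rng(0).standard_normal(10)
    g_shared, mask = compress_and_perturb(g, keep_ratio=0.4, noise_std=0.05)
    print("original:", np.round(g, 3))
    print("shared:  ", np.round(g_shared, 3))
```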
Pages: 843-848
Page count: 6