Secure Federated Learning with Model Compression

Cited by: 1
Authors
Ding, Yahao [1 ]
Shikh-Bahaei, Mohammad [1 ]
Huang, Chongwen [2 ]
Yuan, Weijie [3 ]
Affiliations
[1] Kings Coll London, London, England
[2] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
[3] Southern Univ Sci & Technol, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Federated learning (FL); deep leakage from gradients (DLG); resource block (RB) allocation;
DOI
10.1109/ICCWORKSHOPS57953.2023.10283697
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Although federated learning (FL) has become very popular recently, it is vulnerable to gradient leakage attacks: recent studies have shown that attackers can reconstruct clients' private data from shared models or gradients. Many existing works add privacy protection mechanisms, such as differential privacy (DP) and homomorphic encryption, to prevent user privacy leakage. However, these defenses may increase computation and communication costs or degrade FL performance, and they do not consider the impact of wireless network resources on the FL training process. Herein, we propose a defense method, weight compression, to prevent gradient leakage attacks on FL over wireless networks. The gradient compression matrix is determined by each user's location and channel conditions. Moreover, we add Gaussian noise to the compressed gradients to strengthen the defense. The joint design of learning, wireless resource allocation, and the weight compression matrix is formulated as an optimization problem whose objective is to minimize the FL loss function. To solve it, we first analyze the convergence rate of FL and quantify the effect of the weight matrix on FL convergence. We then find the optimal resource block (RB) allocation by exhaustive search or ant colony optimization (ACO), and use the CVX toolbox to obtain the optimal weight matrix that minimizes the objective function. Simulation results show that the optimized RB allocation accelerates the convergence of FL.
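The core defense in the abstract, masking (compressing) a client's gradient and adding Gaussian noise to the surviving entries before they are shared, can be sketched as follows. This is a minimal illustrative sketch: the function name, the top-k magnitude masking heuristic, and all parameter values are assumptions, since the paper instead derives the compression matrix from each user's location and channel conditions.

```python
import numpy as np

def compress_and_perturb(grad, keep_ratio=0.5, noise_std=0.01, rng=None):
    """Mask a gradient vector down to its k largest-magnitude entries,
    then add Gaussian noise to the kept entries before transmission.
    All names and parameters here are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    k = max(1, int(keep_ratio * grad.size))
    # Binary mask keeping the k largest-magnitude components
    # (a common compression heuristic; the paper's matrix is channel-driven).
    mask = np.zeros_like(grad)
    mask[np.argsort(np.abs(grad))[-k:]] = 1.0
    compressed = grad * mask
    # Gaussian noise on the transmitted entries strengthens the defense
    # against gradient-leakage reconstruction.
    return compressed + mask * rng.normal(0.0, noise_std, size=grad.shape)

# Usage: only the two largest-magnitude entries survive, perturbed by noise.
g = np.array([0.5, -0.1, 0.02, 0.9, -0.3])
shared = compress_and_perturb(g, keep_ratio=0.4, noise_std=0.05)
```

The masked-out entries are exactly zero, so the server (or an eavesdropper) sees only a sparse, noisy version of the true gradient.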
Pages: 843-848
Page count: 6
Related Papers
50 items in total
  • [31] Permissioned Blockchain Frame for Secure Federated Learning
    Sun, Jin
    Wu, Ying
    Wang, Shangping
    Fu, Yixue
    Chang, Xiao
    IEEE COMMUNICATIONS LETTERS, 2022, 26 (01) : 13 - 17
  • [32] Secure Coalition Formation for Federated Machine Learning
    Thakur, Subhasis
    Breslin, John
    DEEP LEARNING THEORY AND APPLICATIONS, PT I, DELTA 2024, 2024, 2171 : 238 - 258
  • [33] Secure Federated Learning for Cognitive Radio Sensing
    Wasilewska, Malgorzata
    Bogucka, Hanna
    Poor, H. Vincent
    IEEE COMMUNICATIONS MAGAZINE, 2023, 61 (03) : 68 - 73
  • [34] Federated Learning Meets Blockchain to Secure the Metaverse
    Moudoud, Hajar
    Cherkaoui, Soumaya
    2023 INTERNATIONAL WIRELESS COMMUNICATIONS AND MOBILE COMPUTING, IWCMC, 2023, : 339 - 344
  • [35] Fault Tolerant and Malicious Secure Federated Learning
    Karakoc, Ferhat
    Kupcu, Alptekin
    Onen, Melek
    CRYPTOLOGY AND NETWORK SECURITY, PT II, CANS 2024, 2025, 14906 : 73 - 95
  • [36] A Secure and Efficient Federated Learning Framework for NLP
    Deng, Jieren
    Wang, Chenghong
    Meng, Xianrui
    Wang, Yijue
    Li, Ji
    Lin, Sheng
    Han, Shuo
    Miao, Fei
    Rajasekaran, Sanguthevar
    Ding, Caiwen
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 7676 - 7682
  • [37] Byzantine-Resilient Secure Federated Learning
    So, Jinhyun
    Guler, Basak
    Avestimehr, A. Salman
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (07) : 2168 - 2181
  • [38] Verifiable and Secure Aggregation Scheme for Federated Learning
    Ren Y.
    Fu Y.
    Li Y.
    Beijing Youdian Daxue Xuebao/Journal of Beijing University of Posts and Telecommunications, 2023, 46 (03): : 49 - 55
  • [39] EaSTFLy: Efficient and secure ternary federated learning
    Dong, Ye
    Chen, Xiaojun
    Shen, Liyan
    Wang, Dakui
    COMPUTERS & SECURITY, 2020, 94
  • [40] Quality Inference in Federated Learning with Secure Aggregation
    Pejó B.
    Biczók G.
    IEEE Transactions on Big Data, 2023, 9 (05): : 1430 - 1437