Learned Parameter Compression for Efficient and Privacy-Preserving Federated Learning

Cited by: 0
Authors
Chen, Yiming [1 ,2 ]
Abrahamyan, Lusine [3 ]
Sahli, Hichem [1 ,2 ]
Deligiannis, Nikos [1 ,2 ]
Affiliations
[1] Vrije Univ Brussel, Dept Elect & Informat, B-1050 Brussels, Belgium
[2] Interuniv Microelekt Ctr, B-3001 Leuven, Belgium
[3] BeVi Best View, A-1190 Vienna, Austria
Funding
Research Foundation Flanders (FWO), Belgium
Keywords
Deep learning; federated learning; data privacy; gradient compression; autoencoder
DOI
10.1109/OJCOMS.2024.3409191
CLC (Chinese Library Classification)
TM [Electrical Engineering]; TN [Electronics & Communication Technology]
Discipline Codes
0808; 0809
Abstract
Federated learning (FL) enables collaborative training of deep learning models among multiple clients while safeguarding data privacy, security, and legal compliance by keeping training data local. Despite these benefits, wider adoption of FL is hindered by communication overheads and potential privacy risks. Transmitting locally updated model parameters between edge clients and the server demands high communication bandwidth, leading to high latency and straining Internet infrastructure. Furthermore, recent works have shown that a malicious server can reconstruct clients' training data from gradients, significantly escalating privacy threats and violating privacy regulations. Various defense techniques have been proposed to address this information leakage from gradients or updates, including adding noise to gradients, performing model compression (such as sparsification), and feature perturbation. However, these methods either impede model convergence or entail substantial communication costs, further exacerbating the communication demands of FL. To achieve efficient and privacy-preserving FL, we introduce an autoencoder-based method for compressing, and thus perturbing, the model parameters. The client uses an autoencoder to learn a representation of its local model parameters and then shares this compressed representation with the server instead of the true model parameters. The lossy compression performed by the autoencoder serves as an effective protection against information leakage from the updates. Additionally, the perturbation is intrinsically linked to the autoencoder's input, thereby adapting the perturbation to the parameters of different layers. Moreover, our approach reduces the communication rate by 4.1× compared to federated averaging. We empirically validate our method on two widely used models and three datasets in the federated learning setting, and assess its performance against several well-established defense frameworks. The results indicate that our approach attains model performance nearly identical to that of training with unmodified local updates, while effectively preventing information leakage and reducing communication costs compared to other methods, including noisy gradients, gradient sparsification, and PRECODE.
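To make the data flow in the abstract concrete, below is a minimal PyTorch sketch (not the authors' implementation) of the idea: the client flattens its local model update, encodes it with an autoencoder, and transmits only the latent codes, so the server receives a lossily compressed, and hence perturbed, version of the parameters. All names and sizes here (ParamAutoencoder, CHUNK, LATENT, the 4x chunk-to-latent ratio) are illustrative assumptions chosen only to mirror the roughly 4.1x rate reduction reported in the abstract.

```python
# Minimal sketch, assuming a PyTorch setup; ParamAutoencoder, CHUNK, and
# LATENT are hypothetical names/sizes, not values taken from the paper.
import torch
import torch.nn as nn

CHUNK = 512    # assumed: the flattened update is split into fixed-size chunks
LATENT = 128   # assumed latent size: 512 -> 128 gives a 4x rate reduction,
               # in the spirit of the ~4.1x reduction reported in the abstract

class ParamAutoencoder(nn.Module):
    """Hypothetical autoencoder over chunks of flattened model parameters."""
    def __init__(self, chunk=CHUNK, latent=LATENT):
        super().__init__()
        self.encoder = nn.Linear(chunk, latent)
        self.decoder = nn.Linear(latent, chunk)

def flatten_update(model, chunk=CHUNK):
    """Flatten a local update into zero-padded chunks of size `chunk`."""
    flat = torch.cat([p.detach().reshape(-1) for p in model.parameters()])
    pad = (-flat.numel()) % chunk
    flat = torch.cat([flat, flat.new_zeros(pad)])
    return flat.view(-1, chunk), pad

# Client side: encode the local update and send only the latent codes.
local_model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
ae = ParamAutoencoder()  # in the paper's setting this would be a trained model

chunks, pad = flatten_update(local_model)
codes = ae.encoder(chunks)  # this is what gets transmitted to the server
print(f"floats sent: {codes.numel()} instead of {chunks.numel()}")

# Server side: decode the codes; the reconstruction is lossy, which is what
# perturbs the update and hinders reconstruction of clients' training data.
recon = ae.decoder(codes).reshape(-1)
recon = recon[: recon.numel() - pad]  # strip the zero padding
```

In the paper's setting the autoencoder would be trained so that the reconstruction error is small enough to preserve convergence while still masking the exact parameter values; the untrained encoder above only illustrates the communication path and the rate reduction.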
Pages: 3503-3516 (14 pages)
Related Papers
50 records in total (first 10 shown)
• [1] Hao, Meng; Li, Hongwei; Xu, Guowen; Liu, Sen; Yang, Haomiao. Towards Efficient and Privacy-preserving Federated Deep Learning. ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019.
• [2] Li, Yiran; Li, Hongwei; Xu, Guowen; Huang, Xiaoming; Lu, Rongxing. Efficient Privacy-Preserving Federated Learning With Unreliable Users. IEEE Internet of Things Journal, 2022, 9(13): 11590-11603.
• [3] Kanchan, Sneha; Jang, Jae Won; Yoon, Jun Yong; Choi, Bong Jun. Efficient and privacy-preserving group signature for federated learning. Future Generation Computer Systems - The International Journal of eScience, 2023, 147: 93-106.
• [4] Xu, Jieyu; Li, Hongwei; Zeng, Jia; Hao, Meng. Efficient and Privacy-Preserving Federated Learning with Irregular Users. IEEE International Conference on Communications (ICC 2022), 2022: 534-539.
• [5] Yang, Xue; Ma, Minjie; Tang, Xiaohu. An efficient privacy-preserving and verifiable scheme for federated learning. Future Generation Computer Systems - The International Journal of eScience, 2024, 160: 238-250.
• [6] Liu, Wenchao; Zhou, Tanping; Chen, Long; Yang, Hongjian; Han, Jiang; Yang, Xiaoyuan. Round efficient privacy-preserving federated learning based on MKFHE. Computer Standards & Interfaces, 2024, 87.
• [7] Zheng, Yandong; Zhu, Hui; Lu, Rongxing; Zhang, Songnian; Guan, Yunguo; Wang, Fengwei. Towards Efficient and Privacy-Preserving Federated Learning for HMM Training. IEEE Conference on Global Communications (GLOBECOM), 2023: 38-43.
• [8] Luan, Shijie; Lu, Xiang; Zhang, Zhuangzhuang; Chang, Guangsheng; Guo, Yunchuan. Efficient and Privacy-Preserving Byzantine-robust Federated Learning. IEEE Conference on Global Communications (GLOBECOM), 2023: 2202-2208.
• [9] Xu, Wei; Zhu, Hui; Zheng, Yandong; Wang, Fengwei; Zhao, Jiaqi; Liu, Zhe; Li, Hui. ELXGB: An Efficient and Privacy-Preserving XGBoost for Vertical Federated Learning. IEEE Transactions on Services Computing, 2024, 17(3): 878-892.
• [10] Eltaras, Tamer; Sabry, Farida; Labda, Wadha; Alzoubi, Khawla; Malluhi, Qutaibah. Efficient Verifiable Protocol for Privacy-Preserving Aggregation in Federated Learning. IEEE Transactions on Information Forensics and Security, 2023, 18: 2977-2990.