Learned Parameter Compression for Efficient and Privacy-Preserving Federated Learning

Cited by: 0
Authors
Chen, Yiming [1 ,2 ]
Abrahamyan, Lusine [3 ]
Sahli, Hichem [1 ,2 ]
Deligiannis, Nikos [1 ,2 ]
Affiliations
[1] Vrije Univ Brussel, Dept Elect & Informat, B-1050 Brussels, Belgium
[2] Interuniv Microelekt Ctr, B-3001 Leuven, Belgium
[3] BeVi Best View, A-1190 Vienna, Austria
Funding
Research Foundation Flanders (FWO), Belgium;
Keywords
Deep learning; federated learning; data privacy; gradient compression; autoencoder;
DOI
10.1109/OJCOMS.2024.3409191
Chinese Library Classification (CLC)
TM (Electrical engineering); TN (Electronics and communication technology);
Subject classification codes
0808; 0809;
Abstract
Federated learning (FL) performs collaborative training of deep learning models among multiple clients, safeguarding data privacy, security, and legal compliance by keeping the training data local. Despite these benefits, the wider deployment of FL is hindered by communication overheads and potential privacy risks. Transmitting locally updated model parameters between edge clients and the server demands high communication bandwidth, leading to high latency and constraints on the Internet infrastructure. Furthermore, recent works have shown that a malicious server can reconstruct clients' training data from gradients, significantly escalating privacy threats and violating regulations. Various defense techniques have been proposed to address this information leakage from the gradients or updates, including adding noise to gradients, performing model compression (such as sparsification), and feature perturbation. However, these methods either impede model convergence or entail substantial communication costs, further exacerbating the communication demands of FL. To achieve efficient and privacy-preserving FL, we introduce an autoencoder-based method for compressing, and thus perturbing, the model parameters. Each client uses an autoencoder to learn a representation of its local model parameters and then shares this representation with the server as the compressed model parameters, rather than the true parameters. The use of the autoencoder for lossy compression serves as effective protection against information leakage from the updates. Additionally, the perturbation is intrinsically linked to the autoencoder's input, thereby achieving a perturbation adapted to the parameters of different layers. Moreover, our approach reduces the communication rate by 4.1x compared to federated averaging.
We empirically validate our method with two widely used models in the federated learning setting, considering three datasets, and assess its performance against several well-established defense frameworks. The results indicate that our approach attains model performance nearly identical to that of unmodified local updates, while effectively preventing information leakage and reducing communication costs compared to other methods, including noisy gradients, gradient sparsification, and PRECODE.
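The communication flow described in the abstract, a client transmitting a compressed latent code instead of its raw parameter update, can be sketched as follows. This is a toy illustration only: the paper's autoencoder is learned, whereas here a random orthonormal linear projection stands in for the encoder/decoder, and the dimensions `d` and `k` are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: d model parameters compressed to a k-dim latent code.
d, k = 4096, 1024

# Stand-in for a trained linear autoencoder: Q has orthonormal columns,
# so Q.T encodes (compress) and Q decodes (lossy reconstruction).
W, _ = np.linalg.qr(rng.standard_normal((d, k)))

def encode(params):
    # Client side: map the local parameter vector to its latent code.
    return W.T @ params

def decode(code):
    # Server side: reconstruct an approximation of the client's parameters.
    return W @ code

params = rng.standard_normal(d)   # flattened local model update
code = encode(params)             # only this k-dim code is transmitted
recon = decode(code)              # server's lossy reconstruction

ratio = d / code.size             # communication reduction factor
err = np.linalg.norm(params - recon) / np.linalg.norm(params)
print(ratio, err)
```

Because the compression is lossy, `recon` differs from `params`; it is exactly this reconstruction error that acts as the privacy-protecting perturbation, while the smaller payload (`k` instead of `d` values per round) yields the communication savings.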
Pages: 3503-3516
Page count: 14