Learned Parameter Compression for Efficient and Privacy-Preserving Federated Learning

Cited: 0
Authors
Chen, Yiming [1 ,2 ]
Abrahamyan, Lusine [3 ]
Sahli, Hichem [1 ,2 ]
Deligiannis, Nikos [1 ,2 ]
Affiliations
[1] Vrije Univ Brussel, Dept Elect & Informat, B-1050 Brussels, Belgium
[2] Interuniv Microelekt Ctr, B-3001 Leuven, Belgium
[3] BeVi Best View, A-1190 Vienna, Austria
Funding
Research Foundation Flanders (FWO), Belgium;
Keywords
Deep learning; federated learning; data privacy; gradient compression; autoencoder;
DOI
10.1109/OJCOMS.2024.3409191
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Federated learning (FL) enables collaborative training of deep learning models across multiple clients while keeping training data local, thereby safeguarding data privacy, security, and regulatory compliance. Despite these benefits, wider adoption of FL is hindered by communication overhead and residual privacy risks. Transmitting locally updated model parameters between edge clients and the server demands high communication bandwidth, leading to high latency and straining Internet infrastructure. Furthermore, recent works have shown that a malicious server can reconstruct clients' training data from gradients, significantly escalating privacy threats and violating regulations. Various defense techniques have been proposed to address this information leakage from gradients or updates, including adding noise to gradients, applying model compression (such as sparsification), and perturbing features. However, these methods either impede model convergence or entail substantial communication costs, further exacerbating the communication demands of FL. To enable efficient and privacy-preserving FL, we introduce an autoencoder-based method for compressing, and thus perturbing, the model parameters. Each client uses an autoencoder to learn a representation of its local model parameters and shares this compressed representation with the server instead of the true parameters. The lossy compression performed by the autoencoder serves as effective protection against information leakage from the updates. Moreover, because the perturbation is intrinsically tied to the autoencoder's input, it adapts to the parameters of different layers. Our approach reduces the communication rate by $4.1\times$ compared to federated averaging. We empirically validate the method with two widely used models in the federated learning setting, on three datasets, and compare it against several well-established defense frameworks. The results show that our approach attains model performance nearly identical to that of unmodified local updates, while effectively preventing information leakage and reducing communication costs relative to other methods, including noisy gradients, gradient sparsification, and PRECODE.
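Below is a minimal sketch of how the autoencoder-based update compression described in the abstract could look on the client and server sides. All names (ParamAutoencoder, client_compress, server_decompress), the chunk and latent sizes, and the flatten-and-chunk strategy are illustrative assumptions, not the authors' implementation; how the autoencoder is trained and shared is not specified in the abstract and is left out of the sketch.

```python
# Hypothetical sketch of autoencoder-based compression of model-parameter updates
# for federated learning. Sizes and names are assumptions for illustration only.
import torch
import torch.nn as nn

CHUNK, LATENT = 512, 128  # assumed chunk and latent sizes (~4x compression per chunk)

class ParamAutoencoder(nn.Module):
    """Maps flattened parameter-update chunks to low-dimensional latent codes and back."""
    def __init__(self, chunk=CHUNK, latent=LATENT):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(chunk, latent), nn.Tanh())
        self.decoder = nn.Linear(latent, chunk)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def flatten_update(update):
    """Concatenate per-layer updates into one vector, padded to a multiple of CHUNK."""
    flat = torch.cat([u.reshape(-1) for u in update.values()])
    pad = (-flat.numel()) % CHUNK
    return torch.cat([flat, flat.new_zeros(pad)]), flat.numel()

def client_compress(update, ae):
    """Client side: share lossy latent codes instead of the true update."""
    flat, n = flatten_update(update)
    with torch.no_grad():
        codes = ae.encoder(flat.view(-1, CHUNK))  # compression doubles as perturbation
    return codes, n

def server_decompress(codes, n, ae, reference):
    """Server side: reconstruct an approximate update and restore per-layer shapes."""
    with torch.no_grad():
        flat = ae.decoder(codes).reshape(-1)[:n]
    out, i = {}, 0
    for name, p in reference.items():
        out[name] = flat[i:i + p.numel()].reshape(p.shape)
        i += p.numel()
    return out

if __name__ == "__main__":
    # Toy example: one client's update on a small two-layer model.
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    update = {k: torch.randn_like(v) * 0.01 for k, v in model.state_dict().items()}
    ae = ParamAutoencoder()
    codes, n = client_compress(update, ae)
    recon = server_decompress(codes, n, ae, model.state_dict())
    print("sent", codes.numel(), "values instead of", n)
```

In a full round, the server would aggregate the reconstructed updates as in federated averaging; the reconstruction error introduced by the lossy autoencoder is what limits gradient-inversion attacks in the paper's setting.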
Pages: 3503-3516
Page count: 14