FLAC: Federated Learning with Autoencoder Compression and Convergence Guarantee

Cited by: 10
Authors
Beitollahi, Mahdi [1 ]
Lu, Ning [1 ]
Affiliations
[1] Queen's Univ, Dept Elect & Comp Engn, Kingston, ON K7L 3N6, Canada
DOI
10.1109/GLOBECOM48099.2022.10000743
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated Learning (FL) is considered the key approach for privacy-preserving, distributed machine learning (ML) systems. However, because large ML models must be transmitted from users to the server in each iteration of FL, communication over resource-constrained networks is currently a fundamental bottleneck in FL, restricting ML model complexity and user participation. One notable trend for reducing the communication cost of FL systems is gradient compression, in which sparsification or quantization techniques are applied. However, these methods are fixed in advance and do not capture the redundant, correlated information across the parameters of the ML models, user devices' data, and iterations of FL. Further, they do not fully exploit the error-correcting capability of the FL process. In this paper, we propose the Federated Learning with Autoencoder Compression (FLAC) approach, which utilizes the redundant information and error-correcting capability of FL to compress user devices' models for uplink transmission. During the Training State, FLAC trains an autoencoder at the server to encode and decode users' models; it then sends the autoencoder to user devices, which use it to compress their local models in subsequent iterations during the Compression State. To guarantee convergence of the FL process, FLAC dynamically controls the autoencoder error by switching between the Training State and the Compression State, adjusting its autoencoder and its compression rate based on the error tolerance of the FL system. We theoretically prove that FLAC converges for FL systems with strongly convex ML models and non-i.i.d. data distributions. Our extensive experimental results over three datasets with different network architectures show that FLAC can achieve compression rates ranging from 83x to 875x while staying within 7 percent of the accuracy of non-compressed FL systems.
Pages: 4589-4594
Page count: 6
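
The abstract describes a two-state protocol: the server first learns an autoencoder over uncompressed uplink models (Training State), then ships it to devices so they can send only latent codes (Compression State), switching back to training whenever the autoencoder error exceeds the FL system's error tolerance. The Python sketch below illustrates that control loop under stated assumptions; the Autoencoder architecture, the flac_round function, the 50-step inner fitting schedule, and the relative-error threshold are all illustrative choices, not the paper's implementation.

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    # Maps a flattened model update of size `dim` to a `latent`-sized code and back.
    def __init__(self, dim, latent):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
        self.dec = nn.Linear(latent, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def reconstruction_error(ae, updates):
    # Mean relative L2 reconstruction error over a batch of flattened updates.
    with torch.no_grad():
        rel = torch.norm(ae(updates) - updates, dim=1) / torch.norm(updates, dim=1)
    return rel.mean().item()

def flac_round(ae, opt, updates, state, tol):
    # One FL round. `updates` is a (num_clients, dim) tensor of flattened
    # local models. In the Compression State, devices would transmit only
    # ae.enc(update); here we model that by encoding and decoding server-side.
    if state == "compression":
        with torch.no_grad():
            decoded = ae(updates)
    else:
        # Training State: uplink is uncompressed; the server fits the
        # autoencoder on the received models (50 steps is an arbitrary choice).
        decoded = updates
        for _ in range(50):
            opt.zero_grad()
            loss = nn.functional.mse_loss(ae(updates), updates)
            loss.backward()
            opt.step()
    # State switching: compress only while the autoencoder error stays
    # within the FL system's error tolerance.
    next_state = "compression" if reconstruction_error(ae, updates) < tol else "training"
    return decoded.mean(dim=0), next_state    # FedAvg-style averaging

# Toy driver: 8 clients, 1000-parameter models, 10x latent compression.
ae = Autoencoder(dim=1000, latent=100)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
state = "training"
for rnd in range(5):
    client_updates = torch.randn(8, 1000)     # stand-ins for clients' local models
    global_update, state = flac_round(ae, opt, client_updates, state, tol=0.3)
    print(f"round {rnd}: next state = {state}")

In the real system only the encoder output would traverse the uplink, and the paper's convergence analysis ties the switching tolerance to the error tolerance of the (strongly convex) FL objective; both are abstracted into the fixed tol threshold here.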