FLAC: Federated Learning with Autoencoder Compression and Convergence Guarantee

Cited by: 10
Authors
Beitollahi, Mahdi [1 ]
Lu, Ning [1 ]
Affiliations
[1] Queen's Univ, Dept Elect & Comp Engn, Kingston, ON K7L 3N6, Canada
Keywords
DOI
10.1109/GLOBECOM48099.2022.10000743
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Federated Learning (FL) is considered the key approach for privacy-preserving, distributed machine learning (ML) systems. However, due to the transmission of large ML models from users to the server in each iteration of FL, communication over resource-constrained networks is currently a fundamental bottleneck in FL, restricting the ML model complexity and user participation. One of the notable trends to reduce the communication cost of FL systems is gradient compression, in which techniques in the form of sparsification or quantization are utilized. However, these methods are fixed in advance and do not capture the redundant, correlated information across parameters of the ML models, user devices' data, and iterations of FL. Further, these methods do not fully take advantage of the error-correcting capability of the FL process. In this paper, we propose the Federated Learning with Autoencoder Compression (FLAC) approach that utilizes the redundant information and error-correcting capability of FL to compress user devices' models for uplink transmission. FLAC trains an autoencoder to encode and decode users' models at the server in the Training State and then sends the autoencoder to user devices for compressing local models for future iterations during the Compression State. To guarantee the convergence of FL, FLAC dynamically controls the autoencoder error by switching between the Training State and Compression State to adjust its autoencoder and its compression rate based on the error tolerance of the FL system. We theoretically prove that FLAC converges for FL systems with strongly convex ML models and non-i.i.d. data distribution. Our extensive experimental results over three datasets with different network architectures show that FLAC can achieve compression rates ranging from 83x to 875x while staying within 7 percent of the accuracy of the non-compressed FL systems.
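The abstract describes a two-state protocol: in the Training State the server receives uncompressed user models and fits an autoencoder on them, and in the Compression State users upload only latent codes that the server decodes, switching back to the Training State whenever the reconstruction error exceeds the FL system's error tolerance. The sketch below is an illustrative reading of that protocol, not the authors' implementation: the fully connected autoencoder, the `error_tol` threshold, and the FedAvg-style aggregation are all assumed placeholders.

```python
# Illustrative sketch of a FLAC-style two-state round (not the paper's code).
# Assumptions: user updates are flattened into fixed-length vectors, the
# autoencoder is a one-layer MLP, and the error check runs in one process for
# brevity (in a real system the client, which holds the raw update, would
# check reconstruction error before deciding what to upload).
import torch
import torch.nn as nn

class UpdateAutoencoder(nn.Module):
    def __init__(self, dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def flac_round(client_updates, ae, state, error_tol=1e-2, lr=1e-3):
    """Run one FL round under the assumed two-state scheme and return the
    aggregated update plus the state to use in the next round."""
    updates = torch.stack(client_updates)          # [num_clients, dim]
    if state == "training":
        # Training State: uplink is uncompressed; the server fits the
        # autoencoder on the received updates.
        opt = torch.optim.Adam(ae.parameters(), lr=lr)
        for _ in range(100):
            opt.zero_grad()
            loss = nn.functional.mse_loss(ae(updates), updates)
            loss.backward()
            opt.step()
        decoded = updates                          # nothing was compressed
        next_state = "compression" if loss.item() < error_tol else "training"
    else:
        # Compression State: each user would upload only ae.encoder(update);
        # the server reconstructs the models with the decoder.
        with torch.no_grad():
            codes = ae.encoder(updates)
            decoded = ae.decoder(codes)
        err = nn.functional.mse_loss(decoded, updates).item()
        # Dynamic state switching: fall back to the Training State if the
        # compression error exceeds the tolerance that keeps FL convergent.
        next_state = "training" if err > error_tol else "compression"
    global_update = decoded.mean(dim=0)            # FedAvg-style aggregation
    return global_update, next_state
```

Under this reading, compression rates such as the 83x to 875x range reported in the abstract would correspond to choosing `latent_dim` much smaller than `dim`; the tighter the latent space, the more often the error check would force a return to the Training State.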
Pages: 4589 - 4594
Page count: 6
Related Papers
50 records in total
  • [1] Personalized Federated Learning With Differential Privacy and Convergence Guarantee
    Wei, Kang
    Li, Jun
    Ma, Chuan
    Ding, Ming
    Chen, Wen
    Wu, Jun
    Tao, Meixia
    Poor, H. Vincent
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 4488 - 4503
  • [2] Robust Softmax Aggregation on Blockchain based Federated Learning with Convergence Guarantee
    Wu, Huiyu
    Klabjan, Diego
    2024 IEEE INTERNATIONAL CONFERENCE ON OMNI-LAYER INTELLIGENT SYSTEMS, COINS 2024, 2024, : 293 - 296
  • [3] A Decentralized Federated Learning Framework via Committee Mechanism With Convergence Guarantee
    Che, Chunjiang
    Li, Xiaoli
    Chen, Chuan
    He, Xiaoyu
    Zheng, Zibin
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (12) : 4783 - 4800
  • [4] Fairness-aware Federated Minimax Optimization with Convergence Guarantee
    Dunda, Gerry Windiarto Mohamad
    Song, Shenghui
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 563 - 568
  • [5] Enhancing IoT Healthcare with Federated Learning and Variational Autoencoder
    Bhatti, Dost Muhammad Saqib
    Choi, Bong Jun
    SENSORS, 2024, 24 (11)
  • [6] FREPD: A Robust Federated Learning Framework on Variational Autoencoder
    Gu, Zhipin
    He, Liangzhong
    Li, Peiyan
    Sun, Peng
    Shi, Jiangyong
    Yang, Yuexiang
    COMPUTER SYSTEMS SCIENCE AND ENGINEERING, 2021, 39 (03): : 307 - 320
  • [7] On Model Compression for Neural Networks: Framework, Algorithm, and Convergence Guarantee
    Li, Chenyang
    Chung, Jihoon
    Du, Mengnan
    Wang, Haimin
    Zhou, Xianlian
    Shen, Bo
    arXiv, 2023,
  • [8] Secure Federated Learning with Model Compression
    Ding, Yahao
    Shikh-Bahaei, Mohammad
    Huang, Chongwen
    Yuan, Weijie
    2023 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS, ICC WORKSHOPS, 2023, : 843 - 848
  • [9] Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee
    Fan, Flint Xiaofeng
    Ma, Yining
    Dai, Zhongxiang
    Jing, Wei
    Tan, Cheston
    Low, Bryan Kian Hsiang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [10] Network Update Compression for Federated Learning
    Kathariya, Birendra
    Li, Li
    Li, Zhu
    Duan, Lingyu
    Liu, Shan
    2020 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2020, : 38 - 41