FLAC: Federated Learning with Autoencoder Compression and Convergence Guarantee

Cited by: 10
Authors
Beitollahi, Mahdi [1 ]
Lu, Ning [1 ]
Affiliations
[1] Queens Univ, Dept Elect & Comp Engn, Kingston, ON K7L 3N6, Canada
DOI
10.1109/GLOBECOM48099.2022.10000743
CLC Classification Number
TP [Automation and Computer Technology]
Subject Classification Code
0812
Abstract
Federated Learning (FL) is considered the key approach for privacy-preserving, distributed machine learning (ML) systems. However, because large ML models are transmitted from users to the server in each iteration of FL, communication over resource-constrained networks is currently a fundamental bottleneck in FL, restricting both ML model complexity and user participation. One notable trend for reducing the communication cost of FL systems is gradient compression, in which techniques in the form of sparsification or quantization are utilized. However, these methods are fixed in advance and do not capture the redundant, correlated information across the parameters of the ML models, user devices' data, and iterations of FL. Further, these methods do not fully take advantage of the error-correcting capability of the FL process. In this paper, we propose the Federated Learning with Autoencoder Compression (FLAC) approach, which utilizes the redundant information and error-correcting capability of FL to compress user devices' models for uplink transmission. FLAC trains an autoencoder to encode and decode users' models at the server in the Training State, and then sends the autoencoder to user devices for compressing local models in future iterations during the Compression State. To guarantee the convergence of the FL, FLAC dynamically controls the autoencoder error by switching between the Training State and the Compression State, adjusting its autoencoder and its compression rate based on the error tolerance of the FL system. We theoretically prove that FLAC converges for FL systems with strongly convex ML models and non-i.i.d. data distributions. Our extensive experimental results over three datasets with different network architectures show that FLAC achieves compression rates ranging from 83x to 875x while staying within 7 percent of the accuracy of the non-compressed FL systems.
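The abstract describes FLAC's core mechanism: the server alternates between a Training State, in which it fits an autoencoder on uncompressed client uploads, and a Compression State, in which clients upload only encoded representations, with the switch driven by a reconstruction-error tolerance. The following minimal Python sketch illustrates that control flow under stated assumptions; the dimensions, the toy MLP autoencoder, the flac_round helper, and the tolerance value are placeholders for illustration, not the authors' implementation.

import torch
import torch.nn as nn

MODEL_DIM = 1024  # flattened size of one client model update (placeholder)
CODE_DIM = 16     # size of the compressed code (placeholder; ~64x compression)

class Autoencoder(nn.Module):
    """Toy server-side autoencoder over flattened client models (assumed architecture)."""
    def __init__(self, dim=MODEL_DIM, code=CODE_DIM):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, code))
        self.decoder = nn.Sequential(nn.Linear(code, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def flac_round(state, ae, opt, client_models, tol):
    """One FL round of the hypothesized state-switching logic.

    state: "TRAIN"    -> clients upload raw models; the server fits the autoencoder.
           "COMPRESS" -> clients upload codes; the server decodes them.
    tol:   reconstruction-error tolerance tied to the FL convergence guarantee.
    """
    batch = torch.stack(client_models)
    if state == "TRAIN":
        # Training State: fit the autoencoder on the uncompressed uploads.
        opt.zero_grad()
        err = nn.functional.mse_loss(ae(batch), batch)
        err.backward()
        opt.step()
        aggregated = batch  # raw uploads are aggregated directly this round
        # Switch to compression once reconstruction error is within tolerance.
        next_state = "COMPRESS" if err.item() < tol else "TRAIN"
    else:
        # Compression State: encoding happens on-device in real FLAC; the server
        # only decodes. The direct error check below is purely illustrative.
        with torch.no_grad():
            codes = ae.encoder(batch)
            aggregated = ae.decoder(codes)
            err = nn.functional.mse_loss(aggregated, batch)
        # Fall back to the Training State if the error drifts past the tolerance.
        next_state = "TRAIN" if err.item() > tol else "COMPRESS"
    global_model = aggregated.mean(dim=0)  # FedAvg-style aggregation
    return next_state, global_model, err.item()

if __name__ == "__main__":
    ae = Autoencoder()
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    state = "TRAIN"
    for rnd in range(5):
        uploads = [torch.randn(MODEL_DIM) for _ in range(8)]  # stand-in client models
        state, _, err = flac_round(state, ae, opt, uploads, tol=0.5)
        print(f"round {rnd}: next state = {state}, reconstruction error = {err:.3f}")

Note that in an actual deployment the encoder would run on the user device and the server would see only the codes, so the error check in the Compression State would have to rely on periodic uncompressed uploads or a proxy estimate; the sketch computes it directly only to keep the example self-contained.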
Pages: 4589-4594
Page count: 6
Related Papers
50 items in total
  • [41] Adaptive Compression in Federated Learning via Side Information
    Isik, Berivan
    Pase, Francesco
    Gunduz, Deniz
    Koyejo, Sanmi
    Weissman, Tsachy
    Zorzi, Michele
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024
  • [42] An Adaptive Compression and Communication Framework for Wireless Federated Learning
    Yang, Yang
    Dang, Shuping
    Zhang, Zhenrong
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 10835 - 10854
Wyner-Ziv Gradient Compression for Federated Learning
    Liang, Kai
    Zhong, Huiru
    Chen, Haoning
    Wu, Youlong
arXiv, 2021
  • [44] ClusterGrad: Adaptive Gradient Compression by Clustering in Federated Learning
    Cui, Laizhong
    Su, Xiaoxin
    Zhou, Yipeng
    Zhang, Lei
2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2020
  • [45] Efficient Client Sampling with Compression in Heterogeneous Federated Learning
    Marnissi, Ouiame
    El Hammouti, Hajar
    Bergou, El Houcine
IEEE INFOCOM 2024-IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS, INFOCOM WKSHPS 2024, 2024
  • [46] Joint Compression and Deadline Optimization for Wireless Federated Learning
    Zhang, Maojun
    Li, Yang
    Liu, Dongzhu
    Jin, Richeng
    Zhu, Guangxu
    Zhong, Caijun
    Quek, Tony Q. S.
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (07) : 7939 - 7951
  • [47] Adaptive Federated Learning With Gradient Compression in Uplink NOMA
    Sun, Haijian
    Ma, Xiang
    Hu, Rose Qingyang
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (12) : 16325 - 16329
  • [48] Gradient Compression with a Variational Coding Scheme for Federated Learning
    Kathariya, Birendra
    Li, Zhu
    Chen, Jianle
    Van der Auwera, Geert
2021 INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2021
  • [49] Ternary Compression for Communication-Efficient Federated Learning
    Xu, Jinjin
    Du, Wenli
    Jin, Yaochu
    He, Wangli
    Cheng, Ran
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (03) : 1162 - 1176
  • [50] Trusted Encrypted Traffic Intrusion Detection Method Based on Federated Learning and Autoencoder
    Wang, Zixuan
    Miao, Cheng
    Xu, Yuhua
    Li, Zeyi
    Sun, Zhixin
    Wang, Pan
    CHINA COMMUNICATIONS, 2024, 21 (08) : 211 - 235