HCFL: A High Compression Approach for Communication-Efficient Federated Learning in Very Large Scale IoT Networks

Cited by: 7
Authors
Nguyen, Minh-Duong [1 ]
Lee, Sang-Min [1 ]
Pham, Quoc-Viet [2 ]
Hoang, Dinh Thai [3 ]
Nguyen, Diep N. [3 ]
Hwang, Won-Joo [4 ]
Affiliations
[1] Pusan Natl Univ, Dept Informat Convergence Engn, Pusan 46241, South Korea
[2] Pusan Natl Univ, Korean Southeast Ctr Ind Revolut Leader Educ 4, Pusan 46241, South Korea
[3] Univ Technol Sydney, Sch Elect & Data Engn, Sydney, NSW 2007, Australia
[4] Pusan Natl Univ, Dept Biomed Convergence Engn, Yangsan 50612, South Korea
Funding
Australian Research Council; National Research Foundation of Singapore;
Keywords
Autoencoder; communication efficiency; data compression; deep learning; distributed learning; federated learning; internet-of-things; machine type communication;
DOI
10.1109/TMC.2022.3190510
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Federated learning (FL) is a new artificial intelligence concept that enables Internet-of-Things (IoT) devices to learn a collaborative model without sending raw data to centralized nodes for processing. Despite its numerous advantages, the low computing resources of IoT devices and the high communication costs of exchanging model parameters severely limit the application of FL in massive IoT networks. In this work, we develop a novel compression scheme for FL, called high-compression federated learning (HCFL), for very-large-scale IoT networks. HCFL reduces the data load of FL processes without changing their structure or hyperparameters. In this way, we not only significantly reduce communication costs but also make intensive learning processes more adaptable to low-computing-resource IoT devices. Furthermore, we investigate the relationship between the number of IoT devices and the convergence level of the FL model, and thereby better assess the quality of the FL process. We demonstrate our HCFL scheme through both simulations and mathematical analyses. Our theoretical results can serve as a minimum level of satisfaction, proving that the FL process achieves good performance when a determined configuration is met. Therefore, we show that HCFL is applicable to any FL-integrated network with numerous IoT devices.
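The abstract combines the keywords "autoencoder" and "data compression" with FL parameter exchange. As a rough illustration of that idea, the minimal sketch below compresses a client's flattened model update chunk-wise with a small fully connected autoencoder, with the encoder running on the IoT device and the decoder on the server. It is not the authors' actual HCFL architecture; all names (UpdateAutoencoder, compress, decompress) and sizes (CHUNK, LATENT_DIM) are illustrative assumptions.

```python
# Illustrative sketch only: chunk-wise autoencoder compression of FL updates,
# in the spirit of the abstract above. Not the paper's HCFL implementation.
import torch
import torch.nn as nn

CHUNK = 256      # flattened-parameter chunk length fed to the autoencoder (assumed)
LATENT_DIM = 32  # compressed code size per chunk, i.e., an 8x reduction (assumed)

class UpdateAutoencoder(nn.Module):
    """Encoder runs on the IoT device; decoder runs on the server."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(CHUNK, 128), nn.ReLU(),
                                     nn.Linear(128, LATENT_DIM))
        self.decoder = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                                     nn.Linear(128, CHUNK))

def compress(update: torch.Tensor, ae: UpdateAutoencoder) -> torch.Tensor:
    """Device side: flatten the update, zero-pad to a chunk multiple, encode."""
    flat = update.flatten()
    pad = (-flat.numel()) % CHUNK
    flat = torch.cat([flat, flat.new_zeros(pad)])
    with torch.no_grad():
        return ae.encoder(flat.view(-1, CHUNK))  # shape: (n_chunks, LATENT_DIM)

def decompress(codes: torch.Tensor, ae: UpdateAutoencoder, numel: int) -> torch.Tensor:
    """Server side: decode all chunks, then trim the padding before aggregation."""
    with torch.no_grad():
        return ae.decoder(codes).flatten()[:numel]

# Example: an update of 10,000 parameters is sent as 40 codes of length 32.
ae = UpdateAutoencoder()
update = torch.randn(10_000)
codes = compress(update, ae)
recon = decompress(codes, ae, update.numel())
print(codes.numel(), "floats transmitted instead of", update.numel())
```

In a real deployment, the autoencoder would first be trained (e.g., offline) to reconstruct typical parameter chunks, and the achievable compression ratio would trade off against reconstruction error and hence FL convergence; training is omitted here for brevity.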
Pages: 6495-6507
Number of pages: 13
Related Papers
50 in total
  • [1] Ternary Compression for Communication-Efficient Federated Learning
    Xu, Jinjin
    Du, Wenli
    Jin, Yaochu
    He, Wangli
    Cheng, Ran
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (03) : 1162 - 1176
  • [2] Communication-Efficient Semihierarchical Federated Analytics in IoT Networks
    Zhao, Liang
    Valero, Maria
    Pouriyeh, Seyedamin
    Li, Lei
    Sheng, Quan Z.
IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (14) : 12614 - 12627
  • [3] Communication-Efficient Federated Learning for Digital Twin Edge Networks in Industrial IoT
    Lu, Yunlong
    Huang, Xiaohong
    Zhang, Ke
    Maharjan, Sabita
    Zhang, Yan
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2021, 17 (08) : 5709 - 5718
  • [4] Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT
    Mills, Jed
    Hu, Jia
    Min, Geyong
IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (07) : 5986 - 5994
  • [5] Communication-Efficient Federated Learning With Binary Neural Networks
    Yang, Yuzhi
    Zhang, Zhaoyang
    Yang, Qianqian
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (12) : 3836 - 3850
  • [6] Communication-efficient federated learning
    Chen, Mingzhe
    Shlezinger, Nir
    Poor, H. Vincent
    Eldar, Yonina C.
    Cui, Shuguang
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (17)
  • [7] FedSC: Compatible Gradient Compression for Communication-Efficient Federated Learning
    Yu, Xinlei
    Gao, Zhipeng
    Zhao, Chen
    Mo, Zijia
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2023, PT I, 2024, 14487 : 360 - 379
  • [8] Communication-Efficient Federated Multitask Learning Over Wireless Networks
    Ma, Haoyu
    Guo, Huayan
    Lau, Vincent K. N.
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (01) : 609 - 624
  • [9] Communication-Efficient Vertical Federated Learning
    Khan, Afsana
    ten Thij, Marijn
    Wilbik, Anna
    ALGORITHMS, 2022, 15 (08)
  • [10] Communication-Efficient Adaptive Federated Learning
    Wang, Yujia
    Lin, Lu
    Chen, Jinghui
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022