HCFL: A High Compression Approach for Communication-Efficient Federated Learning in Very Large Scale IoT Networks

Cited by: 7
Authors
Nguyen, Minh-Duong [1 ]
Lee, Sang-Min [1 ]
Pham, Quoc-Viet [2 ]
Hoang, Dinh Thai [3 ]
Nguyen, Diep N. [3 ]
Hwang, Won-Joo [4 ]
Affiliations
[1] Pusan Natl Univ, Dept Informat Convergence Engn, Pusan 46241, South Korea
[2] Pusan Natl Univ, Korean Southeast Ctr Ind Revolut Leader Educ 4, Pusan 46241, South Korea
[3] Univ Technol Sydney, Sch Elect & Data Engn, Sydney, NSW 2007, Australia
[4] Pusan Natl Univ, Dept Biomed Convergence Engn, Yangsan 50612, South Korea
Funding
Australian Research Council; National Research Foundation of Singapore;
Keywords
Autoencoder; communication efficiency; data compression; deep learning; distributed learning; federated learning; internet-of-things; machine type communication;
DOI
10.1109/TMC.2022.3190510
Chinese Library Classification (CLC)
TP [automation technology; computer technology];
Discipline Classification Code
0812 ;
Abstract
Federated learning (FL) is a new artificial intelligence paradigm that enables Internet-of-Things (IoT) devices to learn a collaborative model without sending raw data to centralized nodes for processing. Despite its numerous advantages, the low computing resources of IoT devices and the high communication cost of exchanging model parameters severely limit FL applications in massive IoT networks. In this work, we develop a novel compression scheme for FL, called high-compression federated learning (HCFL), for very large scale IoT networks. HCFL reduces the data load of FL processes without changing their structure or hyperparameters. In this way, we not only significantly reduce communication costs but also make intensive learning processes more adaptable to low-computing-resource IoT devices. Furthermore, we investigate the relationship between the number of IoT devices and the convergence level of the FL model, and thereby better assess the quality of the FL process. We demonstrate our HCFL scheme through both simulations and mathematical analyses. Our theoretical results provide a minimum satisfaction level, proving that the FL process can achieve good performance whenever a determined configuration is met. Consequently, we show that HCFL is applicable to any FL-integrated network with numerous IoT devices.
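The abstract (and the "Autoencoder" keyword) indicates that HCFL compresses model updates before they are exchanged. As a hypothetical illustration only, not the authors' actual HCFL architecture, the client/server round trip can be sketched with a linear encoder/decoder standing in for the learned autoencoder; all dimensions and weight choices below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for HCFL: a linear "autoencoder" that maps a flat
# model-update vector to a much shorter code before uplink transmission.
D, K = 1024, 64                                    # update size vs. code size
W_enc = rng.standard_normal((K, D)) / np.sqrt(D)   # shared encoder weights
W_dec = np.linalg.pinv(W_enc)                      # decoder via pseudo-inverse

def client_compress(update):
    """Client side: encode the local model update before sending it."""
    return W_enc @ update                          # transmit K floats, not D

def server_decompress(code):
    """Server side: reconstruct the update, then aggregate as in plain FedAvg."""
    return W_dec @ code

update = rng.standard_normal(D)
code = client_compress(update)
recon = server_decompress(code)
print(f"compression ratio: {update.size / code.size:.0f}x")  # prints 16x
```

A learned nonlinear autoencoder would replace `W_enc`/`W_dec` in practice; the point of the sketch is only that each client's uplink payload shrinks by the ratio `D / K` while the FL training loop itself is unchanged.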
Pages: 6495-6507
Page count: 13
Related Papers
50 results
  • [21] Communication-Efficient Federated Learning for Large-Scale Multiagent Systems in ISAC: Data Augmentation With Reinforcement Learning
    Ouyang, Wenjiang
    Liu, Qian
    Mu, Junsheng
    Al-Dulaimi, Anwer
    Jing, Xiaojun
    Liu, Qilie
    IEEE SYSTEMS JOURNAL, 2024: 1893 - 1904
  • [22] Communication-Efficient Federated Learning for Decision Trees
    Zhao, Shuo
    Zhu, Zikun
    Li, Xin
    Chen, Ying-Chi
    IEEE Transactions on Artificial Intelligence, 2024, 5 (11): 5478 - 5492
  • [23] Communication-Efficient Federated Learning with Adaptive Quantization
    Mao, Yuzhu
    Zhao, Zihao
    Yan, Guangfeng
    Liu, Yang
    Lan, Tian
    Song, Linqi
    Ding, Wenbo
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (04)
  • [24] Communication-Efficient Secure Aggregation for Federated Learning
    Ergun, Irem
    Sami, Hasin Us
    Guler, Basak
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 3881 - 3886
  • [25] FedBoost: Communication-Efficient Algorithms for Federated Learning
    Hamer, Jenny
    Mohri, Mehryar
    Suresh, Ananda Theertha
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [26] Communication-efficient Subspace Methods for High-dimensional Federated Learning
    Shi, Zai
    Eryilmaz, Atilla
    2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021), 2021, : 543 - 550
  • [27] Communication-Efficient Federated Learning Over Capacity-Limited Wireless Networks
    Yun, Jaewon
    Oh, Yongjeong
    Jeon, Yo-Seb
    Vincent Poor, H.
    IEEE Transactions on Cognitive Communications and Networking, 2025, 11 (01): 621 - 637
  • [29] Two-layer accumulated quantized compression for communication-efficient federated learning: TLAQC
    Ren, Yaoyao
    Cao, Yu
    Ye, Chengyin
    Cheng, Xu
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [30] Communication-Efficient Federated Learning in Drone-Assisted IoT Networks: Path Planning and Enhanced Knowledge Distillation Techniques
    Gad, Gad
    Farrag, Aya
    Fadlullah, Zubair Md
    Fouda, Mostafa M.
    2023 IEEE 34TH ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO COMMUNICATIONS, PIMRC, 2023,