COMMUNICATION-EFFICIENT ONLINE FEDERATED LEARNING FRAMEWORK FOR NONLINEAR REGRESSION

Cited by: 8
Authors
Gogineni, Vinay Chakravarthi [1 ]
Werner, Stefan [1 ]
Huang, Yih-Fang [2 ]
Kuh, Anthony [3 ]
Affiliations
[1] Norwegian Univ Sci & Technol NTNU, Dept Elect Syst, Trondheim, Norway
[2] Univ Notre Dame, Dept Elect Engn, Notre Dame, IN 46556 USA
[3] Univ Hawaii, Dept Elect & Comp Engn, Honolulu, HI 96822 USA
Keywords
Online federated learning; energy efficiency; partial sharing; kernel least mean square; random Fourier features
DOI
10.1109/ICASSP43922.2022.9746228
Chinese Library Classification
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
Federated learning (FL) literature typically assumes that each client has a fixed amount of data, which is unrealistic in many practical applications. Some recent works introduced a framework for online FL (Online-Fed) wherein clients perform model learning on streaming data and communicate the model to the server; however, they do not address the associated communication overhead. As a solution, this paper presents a partial-sharing-based online federated learning framework (PSO-Fed) that enables clients to update their local models using continuous streaming data and share only portions of those updated models with the server. During a global iteration of PSO-Fed, non-participating clients can still update their local models with new data. Here, we consider a global task of kernel regression, where clients use a random Fourier features-based kernel LMS on their data for local learning. We examine the mean convergence of PSO-Fed for kernel regression. Experimental results show that PSO-Fed can achieve competitive performance with a significantly lower communication overhead than Online-Fed.
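To make the abstract's scheme concrete, the following is a minimal sketch of partial-sharing online FL with a random Fourier features (RFF) kernel LMS, assuming a Gaussian kernel, a toy one-dimensional target, and a simple round-robin rule for picking which M model entries are exchanged each round. All sizes, step sizes, and the coordinate-selection rule are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not taken from the paper)
d = 1        # input dimension
D = 100      # number of random Fourier features
K = 4        # number of clients
M = 20       # model entries exchanged with the server per round
mu = 0.5     # LMS step size

# Random Fourier features approximating a Gaussian kernel:
# z(x) = sqrt(2/D) * cos(V x + b), with V ~ N(0, I), b ~ U[0, 2*pi)
V = rng.normal(0.0, 1.0, size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def rff(x):
    return np.sqrt(2.0 / D) * np.cos(V @ x + b)

# Toy nonlinear target each client observes in noise
f = lambda x: np.sin(2.0 * x[0])

w_global = np.zeros(D)       # server model
w_local = np.zeros((K, D))   # per-client models

for t in range(200):
    # Partial sharing: only M of the D coordinates travel in each
    # direction this round (round-robin selection is one simple choice)
    idx = (np.arange(M) + t * M) % D

    for k in range(K):
        # Download: client overwrites the shared entries with server values
        w_local[k, idx] = w_global[idx]

        # Local kernel LMS update on one streaming sample
        x = rng.uniform(-3.0, 3.0, size=d)
        y = f(x) + 0.05 * rng.normal()
        z = rff(x)
        e = y - w_local[k] @ z
        w_local[k] += mu * e * z

    # Upload: server averages the same M coordinates across clients
    w_global[idx] = w_local[:, idx].mean(axis=0)

# Evaluate the server model against the noiseless target
x_test = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
y_test = np.array([f(x) for x in x_test])
y_hat = np.array([w_global @ rff(x) for x in x_test])
mse = np.mean((y_hat - y_test) ** 2)
```

Note that non-participating clients in PSO-Fed can still run the local LMS update on their streaming data even in rounds where they exchange nothing with the server; the sketch above omits client scheduling for brevity and has all clients participate each round.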
Pages: 5228-5232 (5 pages)
Related papers (50 in total)
  • [1] Communication-Efficient Online Federated Learning Strategies for Kernel Regression
    Gogineni, Vinay Chakravarthi
    Werner, Stefan
    Huang, Yih-Fang
    Kuh, Anthony
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (5): 4531-4544
  • [2] Communication-efficient federated learning
    Chen, Mingzhe
    Shlezinger, Nir
    Poor, H. Vincent
    Eldar, Yonina C.
    Cui, Shuguang
    [J]. PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (17)
  • [3] FedACA: An Adaptive Communication-Efficient Asynchronous Framework for Federated Learning
    Zhou, Shuang
    Huo, Yuankai
    Bao, Shunxing
    Landman, Bennett
    Gokhale, Aniruddha
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON AUTONOMIC COMPUTING AND SELF-ORGANIZING SYSTEMS (ACSOS 2022), 2022, : 71 - 80
  • [4] Communication-Efficient Vertical Federated Learning
    Khan, Afsana
    ten Thij, Marijn
    Wilbik, Anna
    [J]. ALGORITHMS, 2022, 15 (08)
  • [5] Communication-Efficient Adaptive Federated Learning
    Wang, Yujia
    Lin, Lu
    Chen, Jinghui
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [6] FLCP: federated learning framework with communication-efficient and privacy-preserving
    Yang, Wei
    Yang, Yuan
    Xi, Yingjie
    Zhang, Hailong
    Xiang, Wei
    [J]. APPLIED INTELLIGENCE, 2024, 54 (9-10): 6816-6835
  • [7] Communication-Efficient Federated Learning with Heterogeneous Devices
    Chen, Zhixiong
    Yi, Wenqiang
    Liu, Yuanwei
    Nallanathan, Arumugam
    [J]. ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 3602 - 3607
  • [8] Communication-Efficient Federated Learning with Adaptive Quantization
    Mao, Yuzhu
    Zhao, Zihao
    Yan, Guangfeng
    Liu, Yang
    Lan, Tian
    Song, Linqi
    Ding, Wenbo
    [J]. ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (04)
  • [9] FedBoost: Communication-Efficient Algorithms for Federated Learning
    Hamer, Jenny
    Mohri, Mehryar
    Suresh, Ananda Theertha
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [10] Communication-Efficient Secure Aggregation for Federated Learning
    Ergun, Irem
    Sami, Hasin Us
    Guler, Basak
    [J]. 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 3881 - 3886