Accelerating convergence in wireless federated learning by sharing marginal data

Cited by: 0
Authors
Seo, Eunil [1 ]
Pham, Vinh [2 ]
Elmroth, Erik [1 ]
Affiliations
[1] Umea Univ, Dept Comp Sci, Umea, Sweden
[2] Sungkyunkwan Univ, Comp Sci & Engn Dept, Suwon 16419, South Korea
Keywords
Edge computing; federated learning; data sharing; wireless mobile network
DOI
10.1109/ICOIN56518.2023.10048937
CLC number
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
Deploying federated learning (FL) over wireless mobile networks can be expensive because of the cost of wireless communication resources. Efforts have been made to reduce communication costs by accelerating model convergence, leading to the development of model-driven methods based on feature extraction, model-integrated algorithms, and client selection. However, the resulting performance gains are limited by the dependence of neural network convergence on input data quality. This work, therefore, investigates the use of marginal shared data (e.g., a single data entry) to accelerate model convergence and thereby reduce communication costs in FL. Experimental results show that sharing even a single piece of data can improve performance by 14.6% and reduce communication costs by 61.13% when using the federated averaging algorithm (FedAvg). Marginal data sharing could therefore be an attractive and practical solution in privacy-flexible environments or collaborative operational systems such as fog robotics and vehicles. Moreover, by assigning new labels to the shared data, it is possible to extend the set of classification labels of an FL model even when the initial input datasets lack the labels in question.
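The two mechanisms the abstract combines can be sketched in a few lines: standard FedAvg aggregation (a dataset-size-weighted average of client parameters) plus a step that appends one shared data entry to every client's local dataset before training. This is a minimal illustration, not the paper's implementation; the function names `fedavg` and `share_marginal_data` are hypothetical.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average each parameter tensor across clients,
    weighted by the size of each client's local dataset."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

def share_marginal_data(client_datasets, shared_entry):
    """Marginal data sharing: append a single shared (x, y) entry
    to every client's local dataset before the next training round."""
    return [ds + [shared_entry] for ds in client_datasets]

# Toy example: two clients, one scalar parameter each.
aggregated = fedavg(
    [[np.array([2.0])], [np.array([4.0])]],  # per-client parameters
    [1, 3],                                  # local dataset sizes
)
# Weighted average: 2.0 * 1/4 + 4.0 * 3/4 = 3.5
```

In the paper's setting, the shared entry could also carry a label absent from the clients' original datasets, which is how the abstract's label-extension claim works: every client now observes at least one example of the new class.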
Pages: 122-127
Page count: 6
Related papers
50 records in total
  • [1] Accelerating Convergence of Federated Learning in MEC With Dynamic Community
    Sun, Wen
    Zhao, Yong
    Ma, Wenqiang
    Guo, Bin
    Xu, Lexi
    Duong, Trung Q.
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (02) : 1769 - 1784
  • [2] Felinet: Accelerating Federated Learning Convergence in Heterogeneous Edge Networks
    Lin, Canshu
    He, Dongbiao
    Ming, Zhongxing
    Cui, Laizhong
    PROCEEDINGS OF THE 2023 THE 2ND ACM WORKSHOP ON DATA PRIVACY AND FEDERATED LEARNING TECHNOLOGIES FOR MOBILE EDGE NETWORK, FEDEDGE 2023, 2023, : 125 - 130
  • [3] Accelerating Hybrid Federated Learning Convergence Under Partial Participation
    Bian, Jieming
    Wang, Lei
    Yang, Kun
    Shen, Cong
    Xu, Jie
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2024, 72 : 3258 - 3271
  • [4] Accelerating DNN Training in Wireless Federated Edge Learning Systems
    Ren, Jinke
    Yu, Guanding
    Ding, Guangyao
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (01) : 219 - 232
  • [5] Device Scheduling with Fast Convergence for Wireless Federated Learning
    Shi, Wenqi
    Zhou, Sheng
    Niu, Zhisheng
    ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020,
  • [6] Accelerating Split Federated Learning Over Wireless Communication Networks
    Xu, Ce
    Li, Jinxuan
    Liu, Yuan
    Ling, Yushi
    Wen, Miaowen
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (06) : 5587 - 5599
  • [7] Convergence Analysis for Wireless Federated Learning with Gradient Recycling
    Chen, Zhixiong
    Yi, Wenqiang
    Liu, Yuanwei
    Nallanathan, Arumugam
    2023 INTERNATIONAL WIRELESS COMMUNICATIONS AND MOBILE COMPUTING, IWCMC, 2023, : 1232 - 1237
  • [8] Clustered Data Sharing for Non-IID Federated Learning over Wireless Networks
    Hu, Gang
    Teng, Yinglei
    Wang, Nan
    Yu, F. Richard
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 1175 - 1180
  • [9] Accelerating Asynchronous Federated Learning Convergence via Opportunistic Mobile Relaying
    Bian, Jieming
    Xu, Jie
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (07) : 10668 - 10680
  • [10] On the Convergence of Decentralized Federated Learning Under Imperfect Information Sharing
    Chellapandi, Vishnu Pandi
    Upadhyay, Antesh
    Hashemi, Abolfazl
    Zak, Stanislaw H.
    IEEE CONTROL SYSTEMS LETTERS, 2023, 7 : 2982 - 2987