AMBLE: Adjusting mini-batch and local epoch for federated learning with heterogeneous devices

Cited by: 11
Authors
Park J. [1]
Yoon D. [1]
Yeo S. [1]
Oh S. [1]
Affiliations
[1] Department of Artificial Intelligence, Ajou University, Suwon
Funding
National Research Foundation, Singapore
Keywords
Federated averaging; Federated learning; Local mini-batch SGD; System heterogeneity;
DOI
10.1016/j.jpdc.2022.07.009
Abstract
As data privacy becomes increasingly important, federated learning, which trains deep learning models while preserving the data privacy of devices, is entering the spotlight. Federated learning lets distributed devices train on their local data independently, without collecting that data on a central server, while still producing a single shared model. However, challenges remain for the participating devices, such as communication overhead and system heterogeneity. In this paper, we propose the Adjusting Mini-Batch and Local Epoch (AMBLE) approach, which adaptively adjusts the local mini-batch size and the number of local epochs for heterogeneous devices in federated learning and updates the parameters synchronously. AMBLE improves computational efficiency by removing stragglers and scales the local learning rate to improve the convergence rate and accuracy of the model. We verify that federated learning with AMBLE trains models stably, with faster convergence and higher accuracy than FedAvg and an adaptive batch size scheme, in both independent and identically distributed (IID) and non-IID settings. © 2022 Elsevier Inc.
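The abstract only sketches how AMBLE balances work across heterogeneous devices. The snippet below is a minimal, illustrative Python sketch of that idea, not the authors' implementation: each device receives a local mini-batch size and local-epoch count scaled by its relative speed (so slow devices stop being stragglers), and the local learning rate is scaled linearly with the assigned batch size. Names such as plan_local_work, device_throughput, and the base_* defaults are assumptions introduced here for illustration.

# Hedged sketch of per-device workload planning, assuming relative device
# throughput is known; not the paper's actual algorithm or code.
from dataclasses import dataclass


@dataclass
class LocalPlan:
    batch_size: int
    local_epochs: int
    learning_rate: float


def plan_local_work(device_throughput, base_batch=64, base_epochs=4, base_lr=0.1):
    """Assign per-device batch size, epoch count, and LR from relative speed."""
    fastest = max(device_throughput.values())
    plans = {}
    for device, speed in device_throughput.items():
        ratio = speed / fastest                      # 1.0 for the fastest device
        batch = max(1, round(base_batch * ratio))    # smaller batches on slower devices
        epochs = max(1, round(base_epochs * ratio))  # fewer local passes on slower devices
        lr = base_lr * batch / base_batch            # linear learning-rate scaling
        plans[device] = LocalPlan(batch, epochs, lr)
    return plans


if __name__ == "__main__":
    # Example: samples/second measured on three heterogeneous clients.
    for name, plan in plan_local_work(
            {"phone": 50.0, "edge_gpu": 400.0, "workstation": 800.0}).items():
        print(name, plan)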
Pages: 13-23
Number of pages: 10