Federated Ensemble Model-Based Reinforcement Learning in Edge Computing

Cited by: 13
Authors
Wang, Jin [1 ]
Hu, Jia [1 ]
Mills, Jed [1 ]
Min, Geyong [1 ]
Xia, Ming [2 ]
Georgalas, Nektarios [3 ]
Affiliations
[1] Univ Exeter, Dept Comp Sci, Exeter EX4 4PY, England
[2] Google, Mountain View, CA 94043 USA
[3] British Telecommun PLC, Appl Res Dept, London EC1A 7AJ, England
Funding
EU Horizon 2020; UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Computational modeling; Data models; Heuristic algorithms; Training; Edge computing; Reinforcement learning; Analytical models; Deep reinforcement learning; distributed machine learning; edge computing; federated learning;
DOI
10.1109/TPDS.2023.3264480
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Classification Code
081202;
Abstract
Federated learning (FL) is a privacy-preserving distributed machine learning paradigm that enables collaborative training among geographically distributed and heterogeneous devices without gathering their data. Extending FL beyond supervised learning, federated reinforcement learning (FRL) has been proposed to handle sequential decision-making problems in edge computing systems. However, existing FRL algorithms directly combine model-free RL with FL, which often leads to high sample complexity and a lack of theoretical guarantees. To address these challenges, we propose a novel FRL algorithm that, for the first time, effectively incorporates model-based RL and ensemble knowledge distillation into FL. Specifically, we utilise FL and knowledge distillation to create an ensemble of dynamics models for the clients, and then train the policy solely on the ensemble model, without interacting with the environment. Furthermore, we theoretically prove that monotonic improvement of the proposed algorithm is guaranteed. Extensive experimental results demonstrate that our algorithm achieves much higher sample efficiency than classic model-free FRL algorithms on challenging continuous-control benchmark environments under edge computing settings. The results also highlight the significant impact of heterogeneous client data and local model update steps on the performance of FRL, validating the insights obtained from our theoretical analysis.
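To make the workflow described in the abstract concrete, the following is a minimal, self-contained Python sketch of the general idea: each client fits a dynamics model on its own non-IID transition data, the server keeps the client models as an ensemble, and a policy is then improved purely from ensemble-model rollouts, with no further environment interaction. This is an illustrative sketch only, not the authors' algorithm or code; the federated aggregation and knowledge-distillation steps are omitted, and all names (LinearDynamicsModel, model_return, etc.), model choices, and hyperparameters are assumptions made for the example.

```python
# Illustrative sketch only (not the paper's implementation): client-side dynamics
# models form a server-side ensemble, and the policy is trained from model
# rollouts alone. All classes, names, and hyperparameters here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, N_CLIENTS = 3, 1, 4

def true_dynamics(s, a):
    """Stand-in environment used only to generate each client's offline data."""
    return 0.9 * s + 0.1 * np.tanh(a)

class LinearDynamicsModel:
    """Least-squares model s' ~ [s, a] @ W, fitted on one client's local data."""
    def fit(self, states, actions, next_states):
        X = np.hstack([states, actions])
        self.W, *_ = np.linalg.lstsq(X, next_states, rcond=None)

    def predict(self, s, a):
        return np.concatenate([s, a]) @ self.W

# 1) Each client fits a dynamics model on its own heterogeneous (non-IID) data.
ensemble = []
for c in range(N_CLIENTS):
    S = rng.normal(loc=0.5 * c, scale=1.0, size=(256, STATE_DIM))
    A = rng.uniform(-1.0, 1.0, size=(256, ACTION_DIM))
    model = LinearDynamicsModel()
    model.fit(S, A, true_dynamics(S, A))
    ensemble.append(model)

# 2) Policy improvement uses only the ensemble model -- no environment queries.
def policy(s, theta):
    return np.array([np.tanh(theta @ s)])          # 1-D action of size ACTION_DIM

def model_return(theta, horizon=20):
    """Return of a rollout simulated entirely by the averaged ensemble."""
    s, ret = np.ones(STATE_DIM), 0.0
    for _ in range(horizon):
        a = policy(s, theta)
        s = np.mean([m.predict(s, a) for m in ensemble], axis=0)
        ret += -float(np.sum(s ** 2))              # toy cost: drive the state to 0
    return ret

theta = np.zeros(STATE_DIM)
for _ in range(200):                                # crude random-search update
    candidate = theta + 0.1 * rng.normal(size=STATE_DIM)
    if model_return(candidate) > model_return(theta):
        theta = candidate

print("model-based return of learned policy:", model_return(theta))
```

In the paper itself, FL and knowledge distillation are used to build the ensemble of dynamics models, and the policy is optimised with a model-based RL procedure rather than the random search used here; the sketch only mirrors the overall data flow from client data to ensemble model to policy.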
Pages: 1848-1859
Page count: 12
Related Papers
50 records in total
  • [1] Model-based Reinforcement Learning for Elastic Stream Processing in Edge Computing
    Xu, Jinlai
    Palanisamy, Balaji
    2021 IEEE 28TH INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING, DATA, AND ANALYTICS (HIPC 2021), 2021, : 292 - 301
  • [2] Offloading in Mobile Edge Computing Based on Federated Reinforcement Learning
    Dai, Yu
    Xue, Qing
    Gao, Zhen
    Zhang, Qiuhong
    Yang, Lei
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022
  • [3] Model-Based Reinforcement Learning for Quantized Federated Learning Performance Optimization
    Yang, Nuocheng
    Wang, Sihua
    Chen, Mingzhe
    Brinton, Christopher G.
    Yin, Changchuan
    Saad, Walid
    Cui, Shuguang
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 5063 - 5068
  • [4] RSF: Reinforcement learning based hybrid split and federated learning for edge computing environments
    Soleimani, Alireza
    Anabestani, Negar
    Momtazpour, Mahmoud
    2024 32ND INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING, ICEE 2024, 2024, : 836 - 842
  • [5] Learning to Attack Federated Learning: A Model-based Reinforcement Learning Attack Framework
    Li, Henger
    Sun, Xiaolin
    Zheng, Zizhan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [6] Task offloading mechanism based on federated reinforcement learning in mobile edge computing
    Li, Jie
    Yang, Zhiping
    Wang, Xingwei
    Xia, Yichao
    Ni, Shijian
    DIGITAL COMMUNICATIONS AND NETWORKS, 2023, 9 (02) : 492 - 504
  • [7] Model-based Federated Reinforcement Distillation
    Ryu, Sefutsu
    Takamaeda-Yamazaki, Shinya
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 1109 - 1114
  • [8] Node Selection Algorithm for Federated Learning Based on Deep Reinforcement Learning for Edge Computing in IoT
    Yan, Shuai
    Zhang, Peiying
    Huang, Siyu
    Wang, Jian
    Sun, Hao
    Zhang, Yi
    Tolba, Amr
    ELECTRONICS, 2023, 12 (11)
  • [9] Collaborative Caching in Edge Computing via Federated Learning and Deep Reinforcement Learning
    Wang, Yali
    Chen, Jiachao
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022