Decentralized Offloading Strategies Based on Reinforcement Learning for Multi-Access Edge Computing

Cited by: 3
|
Authors
Hu, Chunyang [1 ,2 ]
Li, Jingchen [3 ]
Shi, Haobin [3 ]
Ning, Bin [2 ]
Gu, Qiong [2 ]
Affiliations
[1] Hubei Univ Arts & Sci, Hubei Key Lab Power Syst Design & Test Elect Vehi, Xiangyang 441053, Peoples R China
[2] Hubei Univ Arts & Sci, Sch Comp Engn, Xiangyang 441053, Peoples R China
[3] Northwestern Polytech Univ, Sch Comp Sci & Engn, Xian 710129, Peoples R China
Keywords
multi-access edge computing; deep reinforcement learning; task offloading; RESOURCE-ALLOCATION;
D O I
10.3390/info12090343
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Researchers have applied reinforcement learning to learn offloading strategies for multi-access edge computing (MEC) systems. However, large-scale systems are ill-suited to conventional reinforcement learning because of their huge state and action spaces. For this reason, this work introduces the centralized-training, decentralized-execution mechanism and designs a decentralized reinforcement learning model for multi-access edge computing systems. Considering a cloud server and several edge servers, we separate training from execution in the reinforcement learning model. Execution happens on the edge devices of the system, and the edge servers need no communication with one another; the training process, in contrast, occurs on the cloud server, which yields lower transmission latency. The developed method uses the deep deterministic policy gradient (DDPG) algorithm to optimize offloading strategies. Simulated experiments show that our method learns the offloading strategy for each edge device efficiently.
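The centralized-training, decentralized-execution loop described in the abstract can be sketched as follows. This is a minimal toy sketch, not the paper's implementation: the state features, the latency-style reward, the linear critic, and all hyperparameters are assumptions chosen for brevity. Each edge device runs a local copy of a deterministic actor (execution needs no edge-to-edge communication), transitions are uploaded to the cloud, and the cloud performs the DDPG-style critic and actor updates before broadcasting fresh actor weights.

```python
import numpy as np

# Hypothetical dimensions and hyperparameters (assumptions, not from the paper).
rng = np.random.default_rng(0)
STATE_DIM = 4            # assumed local features: queue, CPU load, channel, task size
GAMMA, LR_A, LR_C = 0.9, 0.05, 0.05
N_EDGES, STEPS = 3, 200

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Actor:
    """Deterministic policy mu(s): offloading ratio in [0, 1]."""
    def __init__(self, w=None):
        self.w = rng.normal(scale=0.1, size=STATE_DIM) if w is None else w.copy()
    def act(self, s):
        return sigmoid(self.w @ s)

class Critic:
    """Q(s, a), approximated linearly in the features [s, a]."""
    def __init__(self):
        self.w = np.zeros(STATE_DIM + 1)
    def q(self, s, a):
        return self.w @ np.append(s, a)

def step_env(s, a):
    """Toy reward: local work is penalized by queue load, offloaded work by a
    transmission cost that shrinks as the channel feature (s[2]) improves."""
    r = -(1.0 - a) * s[0] - a * (s[3] / (s[2] + 0.1))
    return r, rng.uniform(0.1, 1.0, STATE_DIM)

# --- Cloud side: holds the critic, the master actor, and the replay buffer. ---
actor, critic = Actor(), Critic()
buffer = []

# --- Edge side: each device executes a local actor copy, no inter-edge traffic. ---
states = [rng.uniform(0.1, 1.0, STATE_DIM) for _ in range(N_EDGES)]
for t in range(STEPS):
    local = Actor(actor.w)                        # weights broadcast from the cloud
    for i in range(N_EDGES):
        a = np.clip(local.act(states[i]) + rng.normal(scale=0.1), 0.0, 1.0)
        r, s2 = step_env(states[i], a)
        buffer.append((states[i], a, r, s2))      # transition uploaded to the cloud
        states[i] = s2
    # Centralized DDPG-style update on one sampled transition (minibatch of 1).
    s, a, r, s2 = buffer[rng.integers(len(buffer))]
    td = r + GAMMA * critic.q(s2, actor.act(s2)) - critic.q(s, a)
    critic.w += LR_C * td * np.append(s, a)       # TD(0) critic step
    mu = actor.act(s)
    dq_da = critic.w[-1]                          # dQ/da for the linear critic
    actor.w += LR_A * dq_da * mu * (1 - mu) * s   # deterministic policy gradient
```

The design choice to keep the critic only on the cloud mirrors the paper's motivation: edges carry just the lightweight actor, so execution stays local while all learning traffic flows edge-to-cloud.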
Pages: 9
Related Papers
50 records
  • [1] Deep Reinforcement Learning for Dependent Task Offloading in Multi-Access Edge Computing
    Ye, Hengzhou
    Li, Jiaming
    Lu, Qiu
    IEEE ACCESS, 2024, 12 : 166281 - 166297
  • [2] Graph Attention Network Reinforcement Learning Based Computation Offloading in Multi-Access Edge Computing
    Liu, Yuxuan
    Xia, Geming
    Chen, Jian
    Zhang, Danlei
    2023 IEEE 47TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC, 2023, : 966 - 969
  • [3] Optimization for computational offloading in multi-access edge computing: A deep reinforcement learning scheme
    Wang, Jian
    Ke, Hongchang
    Liu, Xuejie
    Wang, Hui
    COMPUTER NETWORKS, 2022, 204
  • [5] Safety-Critical Offloading with Constrained Reinforcement Learning for Multi-access Edge Computing
    Huang, Hui
    Ye, Qiang
    Zhou, Yitong
    ACM TRANSACTIONS ON SENSOR NETWORKS, 2025, 21 (02)
  • [6] Computation Offloading in Multi-Access Edge Computing Using a Deep Sequential Model Based on Reinforcement Learning
    Wang, Jin
    Hu, Jia
    Min, Geyong
    Zhan, Wenhan
    Ni, Qiang
    Georgalas, Nektarios
    IEEE COMMUNICATIONS MAGAZINE, 2019, 57 (05) : 64 - 69
  • [7] Graph convolutional network-based reinforcement learning for tasks offloading in multi-access edge computing
    Leng, Lixiong
    Li, Jingchen
    Shi, Haobin
    Zhu, Yi'an
    MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (19) : 29163 - 29175
  • [9] Offloading dependent tasks in multi-access edge computing: A multi-objective reinforcement learning approach
    Song, Fuhong
    Xing, Huanlai
    Wang, Xinhan
    Luo, Shouxi
    Dai, Penglin
    Li, Ke
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 128 : 333 - 348
  • [10] A Joint Caching and Offloading Strategy Using Reinforcement Learning for Multi-access Edge Computing Users
    Yuan, Yuan
    Su, Wei
    Hong, Gaofeng
    Li, Haoru
    Wang, Chang
    MOBILE NETWORKS & APPLICATIONS, 2024,