Hybrid Decision Based Deep Reinforcement Learning For Energy Harvesting Enabled Mobile Edge Computing

Cited by: 0
Authors
Zhang, Jing [1 ]
Du, Jun [1 ]
Wang, Jian [1 ]
Shen, Yuan [1 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
ALLOCATION;
DOI
Not available
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202 ;
Abstract
For the next generation of communication systems, low latency is an urgent requirement to satisfy the growing computation demands. In response, mobile edge computing (MEC) with energy harvesting (EH) is a promising technology for sustained improvement of the computation experience. However, the frequent variation of harvested energy, coupled with variable computing tasks and the changing computation capacity of servers, makes the computation offloading problem highly dynamic. To achieve satisfactory computation quality in such a highly dynamic offloading problem, devices must learn to take multiple continuous and discrete actions while optimizing system performance metrics such as latency and energy efficiency. In this paper, we propose a continuous-discrete hybrid decision based deep reinforcement learning algorithm for dynamic computation offloading. Specifically, the actor outputs continuous actions (offloading ratio and local computation capacity) for every server, while the critic outputs the discrete action (server selection) and also evaluates the actor's performance for neural network updating. Simulation results validate the effectiveness of the proposed algorithm, which demonstrates superior generalization ability and achieves better performance than discrete decision based deep reinforcement learning methods.
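The hybrid decision structure described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's architecture: the state dimension, the number of servers, and the untrained linear heads are all assumptions made for the example. The key idea it shows is the split of one decision into a continuous part (per-server offloading ratio and local computation capacity, from the actor) and a discrete part (server selection, taken as the argmax of the critic's per-server scores).

```python
import numpy as np

rng = np.random.default_rng(0)

N_SERVERS = 3   # hypothetical number of edge servers
STATE_DIM = 8   # hypothetical state: task size, battery level, channel gains, ...

def actor(state, W):
    """Continuous head: for each server, output an (offloading ratio,
    local computation capacity) pair, squashed into (0, 1) by a sigmoid."""
    logits = state @ W                                  # (STATE_DIM,) @ (STATE_DIM, 2N) -> (2N,)
    return (1.0 / (1.0 + np.exp(-logits))).reshape(N_SERVERS, 2)

def critic(state, cont_actions, V):
    """Discrete head: score every server given the state and the actor's
    continuous action for that server; argmax gives the server selection."""
    feats = np.concatenate([np.tile(state, (N_SERVERS, 1)), cont_actions], axis=1)
    return feats @ V                                    # (N, STATE_DIM + 2) @ (STATE_DIM + 2,) -> (N,)

# Random (untrained) parameters, for illustration only.
W = rng.normal(size=(STATE_DIM, 2 * N_SERVERS))
V = rng.normal(size=(STATE_DIM + 2,))

state = rng.normal(size=STATE_DIM)
cont = actor(state, W)              # continuous actions: per-server (ratio, capacity) in (0, 1)
q = critic(state, cont, V)          # one score per server
server = int(np.argmax(q))          # discrete action: which server to offload to
ratio, capacity = cont[server]      # the continuous decision for the chosen server
```

In training, the critic's scores would also serve as the evaluation signal for updating the actor's parameters, so a single forward pass yields both halves of the hybrid action.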
Pages: 2100-2105 (6 pages)
Related Papers (50 total)
  • [1] Deep Reinforcement Learning-Based Offloading Decision Optimization in Mobile Edge Computing
    Zhang, Hao
    Wu, Wenjun
    Wang, Chaoyi
    Li, Meng
    Yang, Ruizhe
    2019 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2019,
  • [2] Deep Reinforcement Learning for Blockchain-Enabled Mobile Edge Computing Systems
    Li, Jie
    Feng, Jie
    Pei, Qingqi
    Du, Jianbo
    2020 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), 2020,
  • [3] UAV-IRS-assisted energy harvesting for edge computing based on deep reinforcement learning
    Pang, Shanchen
    Wang, Luqi
    Gui, Haiyuan
    Qiao, Sibo
    He, Xiao
    Zhao, Zhiyuan
    FUTURE GENERATION COMPUTER SYSTEMS, 2025, 163
  • [4] Deep Reinforcement Learning for Task Allocation in UAV-enabled Mobile Edge Computing
    Yu, Changliang
    Du, Wei
    Ren, Fan
    Zhao, Nan
    ADVANCES IN INTELLIGENT NETWORKING AND COLLABORATIVE SYSTEMS (INCOS-2021), 2022, 312 : 225 - 232
  • [5] Deep Reinforcement Learning based Energy Scheduling for Edge Computing
    Yang, Qinglin
    Li, Peng
    2020 IEEE INTERNATIONAL CONFERENCE ON SMART CLOUD (SMARTCLOUD 2020), 2020, : 175 - 180
  • [6] Deep Reinforcement Learning and Optimization Based Green Mobile Edge Computing
    Yang, Yang
    Hu, Yulin
    Gursoy, M. Cenk
    2021 IEEE 18TH ANNUAL CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE (CCNC), 2021,
  • [7] Deep Reinforcement Learning and Markov Decision Problem for Task Offloading in Mobile Edge Computing
    Gao, Xiaohu
    Ang, Mei Choo
    Althubiti, Sara A.
    JOURNAL OF GRID COMPUTING, 2023, 21 (04)
  • [9] Deep Reinforcement Learning for Computation Rate Maximization in RIS-Enabled Mobile Edge Computing
    Xu, Jianpeng
    Ai, Bo
    Wu, Lina
    Zhang, Yaoyuan
    Wang, Weirong
    Li, Huiya
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (07) : 10862 - 10866
  • [10] Secure Task Offloading in Blockchain-Enabled Mobile Edge Computing With Deep Reinforcement Learning
    Samy, Ahmed
    Elgendy, Ibrahim A.
    Yu, Haining
    Zhang, Weizhe
    Zhang, Hongli
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2022, 19 (04): : 4872 - 4887