Task offloading of edge computing network based on Lyapunov and deep reinforcement learning

Cited by: 1
Authors
Qiao, Xudong [1 ]
Zhou, Yongxin [2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei, Peoples R China
[2] Hainan Prov Fire Rescue Brigade, Informat & Commun Dept, Haikou, Hainan, Peoples R China
Keywords
Task offloading; Edge computing; Lyapunov optimization; Deep reinforcement learning; Internet
DOI
10.1109/ICCCS61882.2024.10603075
Chinese Library Classification
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
Task offloading based on reinforcement learning is a promising research direction in edge computing. This paper proposes LyDDPG, a Deep Reinforcement Learning (DRL) task offloading framework based on Lyapunov optimization that combines the strengths of Lyapunov optimization and DRL. LyDDPG aims to minimize device energy consumption and reduce queue backlog under long-term data-queue stability and delay constraints by decoupling the original long-term optimization problem into independent per-slot task offloading problems. The experiments consider a multi-user edge computing network with time-varying wireless channels and task data arriving randomly at users over a sequence of time slots. Simulation results show that the LyDDPG algorithm minimizes energy consumption and queue backlog while satisfying the long-term stability constraints. The framework improves the adaptability and performance of the system in a dynamic network environment and provides an efficient approach to task offloading and resource allocation.
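The abstract does not spell out the decoupling step, but the Lyapunov drift-plus-penalty construction it refers to is standard; a minimal sketch follows, with notation assumed here rather than taken from the paper (Q_i(t): data queue backlog of user i in slot t, E(t): device energy consumed in slot t, V > 0: energy-versus-backlog trade-off weight):

    % quadratic Lyapunov function over all user queues (assumed notation)
    L(\mathbf{Q}(t)) = \tfrac{1}{2} \sum_{i} Q_i(t)^2
    % conditional Lyapunov drift between consecutive slots
    \Delta(t) = \mathbb{E}\big[ L(\mathbf{Q}(t+1)) - L(\mathbf{Q}(t)) \,\big|\, \mathbf{Q}(t) \big]
    % per-slot objective: the offloading action in slot t minimizes a bound on
    \Delta(t) + V \, \mathbb{E}\big[ E(t) \,\big|\, \mathbf{Q}(t) \big]

Minimizing this drift-plus-penalty bound slot by slot is what turns the long-term constrained problem into independent per-slot decisions that a DRL agent can solve; a larger V favors lower energy consumption at the cost of a larger queue backlog.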
Pages: 1054-1059 (6 pages)
Related papers (50 records in total)
  • [21] Towards Application-Driven Task Offloading in Edge Computing Based on Deep Reinforcement Learning
    Sun, Ming
    Bao, Tie
    Xie, Dan
    Lv, Hengyi
    Si, Guoliang
    MICROMACHINES, 2021, 12 (09)
  • [22] Task Offloading for UAV-based Mobile Edge Computing via Deep Reinforcement Learning
    Li, Jun
    Liu, Qian
    Wu, Pingyang
    Shu, Feng
    Jin, Shi
    2018 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2018, : 798 - 802
  • [23] Deep reinforcement learning-based online task offloading in mobile edge computing networks
    Wu, Haixing
    Geng, Jingwei
    Bai, Xiaojun
    Jin, Shunfu
    INFORMATION SCIENCES, 2024, 654
  • [24] Dependent Task-Offloading Strategy Based on Deep Reinforcement Learning in Mobile Edge Computing
    Gong, Bencan
    Jiang, Xiaowei
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2023, 2023
  • [25] Deep Reinforcement Learning-Based Task Offloading and Load Balancing for Vehicular Edge Computing
    Wu, Zhoupeng
    Jia, Zongpu
    Pang, Xiaoyan
    Zhao, Shan
    ELECTRONICS, 2024, 13 (08)
  • [26] Deep Reinforcement Learning Based Task Offloading Strategy Under Dynamic Pricing in Edge Computing
    Shi, Bing
    Chen, Feiyang
    Tang, Xing
    SERVICE-ORIENTED COMPUTING (ICSOC 2021), 2021, 13121 : 578 - 594
  • [27] Task Offloading Based on LSTM Prediction and Deep Reinforcement Learning for Efficient Edge Computing in IoT
    Tu, Youpeng
    Chen, Haiming
    Yan, Linjie
    Zhou, Xinyan
    FUTURE INTERNET, 2022, 14 (02)
  • [28] Optimization of lightweight task offloading strategy for mobile edge computing based on deep reinforcement learning
    Lu, Haifeng
    Gu, Chunhua
    Luo, Fei
    Ding, Weichao
    Liu, Xinping
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2020, 102 : 847 - 861
  • [29] Task Offloading and Resource Allocation for Mobile Edge Computing by Deep Reinforcement Learning Based on SARSA
    Alfakih, Taha
    Hassan, Mohammad Mehedi
    Gumaei, Abdu
    Savaglio, Claudio
    Fortino, Giancarlo
    IEEE ACCESS, 2020, 8 : 54074 - 54084
  • [30] Federated Deep Reinforcement Learning-based task offloading system in edge computing environment
    Merakchi, Hiba
    Bagaa, Miloud
    Messaoud, Ahmed Ouameur
    Ksentini, Adlen
    Sehad, Abdenour
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 5580 - 5586