Task offloading of edge computing network based on Lyapunov and deep reinforcement learning

Cited by: 1
Authors
Qiao, Xudong [1]
Zhou, Yongxin [2]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei, Peoples R China
[2] Hainan Prov Fire Rescue Brigade, Informat & Commun Dept, Haikou, Hainan, Peoples R China
Keywords
Task offloading; Edge computing; Lyapunov optimization; Deep Reinforcement Learning; INTERNET
DOI
10.1109/ICCCS61882.2024.10603075
CLC number
TP39 [Computer Applications]
Discipline code
081203; 0835
Abstract
Reinforcement-learning-based task offloading is a promising research direction in edge computing. This paper proposes LyDDPG, a Deep Reinforcement Learning (DRL) task offloading framework based on Lyapunov optimization that combines the strengths of Lyapunov optimization and DRL. By decoupling the original long-term optimization problem into independent per-slot task offloading subproblems, LyDDPG aims to minimize device energy consumption and reduce queue backlog while satisfying long-term data queue stability and delay constraints. The experiments consider a multi-user edge computing network with time-varying wireless channels and random user task arrivals over a sequence of time slots. Simulation results show that the LyDDPG algorithm minimizes energy consumption and queue backlog while meeting the long-term stability constraints. The framework improves the adaptability and performance of the system in a dynamic network environment and provides an efficient way to solve the task offloading and resource allocation problem.
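The per-slot decoupling described in the abstract is characteristic of the standard Lyapunov drift-plus-penalty method. The following is a minimal sketch of that formulation; the queue vector Q_i(t), per-slot energy term E(t), and trade-off weight V are introduced here for illustration and may differ from the paper's exact notation:
L(\mathbf{Q}(t)) = \tfrac{1}{2} \sum_{i} Q_i(t)^2
\Delta(\mathbf{Q}(t)) = \mathbb{E}\big[\, L(\mathbf{Q}(t+1)) - L(\mathbf{Q}(t)) \,\big|\, \mathbf{Q}(t) \,\big]
\min_{\text{offloading decision in slot } t} \; \Delta(\mathbf{Q}(t)) + V \, \mathbb{E}\big[\, E(t) \,\big|\, \mathbf{Q}(t) \,\big]
Minimizing this per-slot upper bound keeps the data queues stable while the weight V > 0 trades queue backlog against energy consumption; in a LyDDPG-style framework, the DRL agent would learn the per-slot offloading decision that approximately minimizes this bound.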
Pages: 1054 - 1059
Page count: 6
Related papers
50 records in total
  • [41] Task Offloading With Service Migration for Satellite Edge Computing: A Deep Reinforcement Learning Approach. Wu, Haonan; Yang, Xiumei; Bu, Zhiyong. IEEE ACCESS, 2024, 12: 25844-25856
  • [42] Deep Reinforcement Learning-Guided Task Reverse Offloading in Vehicular Edge Computing. Gu, Anqi; Wu, Huaming; Tang, Huijun; Tang, Chaogang. 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022: 2200-2205
  • [43] Adaptive Prioritization and Task Offloading in Vehicular Edge Computing Through Deep Reinforcement Learning. Uddin, Ashab; Sakr, Ahmed Hamdi; Zhang, Ning. IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2025, 74 (03): 5038-5052
  • [44] Deep Multiagent Reinforcement Learning for Task Offloading and Resource Allocation in Satellite Edge Computing. Jia, Min; Zhang, Liang; Wu, Jian; Guo, Qing; Zhang, Guowei; Gu, Xuemai. IEEE INTERNET OF THINGS JOURNAL, 2025, 12 (04): 3832-3845
  • [45] Deep Reinforcement Learning Based on Parked Vehicles-Assisted for Task Offloading in Vehicle Edge Computing. Wang, Bingxin; Liu, Lei; Wang, Jie. 2023 INTERNATIONAL WIRELESS COMMUNICATIONS AND MOBILE COMPUTING, IWCMC, 2023: 438-443
  • [46] Deep Reinforcement Learning based Reliability-aware Resource Placement and Task Offloading in Edge Computing. Liang, Jingyu; Feng, Zihan; Gao, Han; Chen, Ying; Huang, Jiwei; Truong, Hong-Linh. 2024 IEEE INTERNATIONAL CONFERENCE ON WEB SERVICES, ICWS 2024, 2024: 697-706
  • [47] Deep Reinforcement Learning and Markov Decision Problem for Task Offloading in Mobile Edge Computing. Xiaohu Gao; Mei Choo Ang; Sara A. Althubiti. Journal of Grid Computing, 2023, 21
  • [48] Cooperative Task Offloading for Mobile Edge Computing Based on Multi-Agent Deep Reinforcement Learning. Yang, Jian; Yuan, Qifeng; Chen, Shuangwu; He, Huasen; Jiang, Xiaofeng; Tan, Xiaobin. IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2023, 20 (03): 3205-3219
  • [49] Dependency-aware task offloading based on deep reinforcement learning in mobile edge computing networks. Li, Junnan; Yang, Zhengyi; Chen, Kai; Ming, Zhao; Li, Xiuhua; Fan, Qilin; Hao, Jinlong; Cheng, Luxi. WIRELESS NETWORKS, 2024, 30 (06): 5519-5531
  • [50] Vehicle Edge Computing Task Offloading Strategy Based on Multi-Agent Deep Reinforcement Learning. Bo, Jianxiong; Zhao, Xu. JOURNAL OF GRID COMPUTING, 2025, 23 (02)