A Hybrid Deep Reinforcement Learning Approach for Dynamic Task Offloading in NOMA-MEC System

Cited: 1
Authors
Shang, Ce [1 ]
Sun, Yan [1 ]
Luo, Hong [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Dept Comp Sci, Beijing, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Mobile edge computing (MEC); non-orthogonal multiple access (NOMA); deep reinforcement learning (DRL); resource allocation; edge; optimization; networks
DOI
10.1109/SECON55815.2022.9918560
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Mobile edge computing (MEC) is regarded as a promising paradigm for increasing the computing capacity of mobile devices (MDs) by offloading tasks to edge servers. Non-orthogonal multiple access (NOMA) is a key multiple access technique that allows many MDs to transmit on the same resource block simultaneously. Motivated by the benefits of combining NOMA and MEC, we investigate dynamic computation offloading in a multi-device, multi-server NOMA-MEC system. We adopt a partial offloading policy in which each MD can offload a portion of its task to an edge server for execution. To minimize the overall computation delay and energy consumption, we formulate a mixed-integer programming (MIP) problem that jointly optimizes edge server selection and the offloading task ratio. Solving this problem, which has a discrete-continuous hybrid action space, is not straightforward because most existing deep reinforcement learning (DRL) algorithms apply only to purely discrete or purely continuous action spaces. To tackle it, we present the hybrid advantage actor-critic (HA2C) approach, which uses an actor-critic architecture consisting of two parallel actor networks and a critic network. Specifically, the discrete and continuous actor networks, both based on deep neural networks (DNNs), determine the MEC server selection and the offloading ratio, respectively. The critic network estimates the current state value, and the resulting advantage function drives the parameter updates of both actor networks. Experimental results show that the proposed algorithm outperforms conventional DRL algorithms that convert the hybrid action space into a unified homogeneous action space.
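The hybrid architecture described in the abstract, with a discrete actor for server selection, a continuous actor for the offloading ratio, and a shared critic whose advantage estimate drives both updates, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the class name HybridActorCritic, the network sizes, the Beta parameterization of the continuous policy, and the one-step update rule are assumptions made for illustration only.

# Minimal illustrative sketch (not the authors' code) of a hybrid advantage
# actor-critic agent: a discrete actor chooses the MEC server, a continuous
# actor chooses the offloading ratio in (0, 1), and a shared critic supplies
# the advantage used to update both actors. Names and sizes are assumptions.
import torch
import torch.nn as nn


class HybridActorCritic(nn.Module):
    def __init__(self, state_dim: int, num_servers: int, hidden: int = 128):
        super().__init__()
        # Discrete actor: categorical policy over edge servers.
        self.discrete_actor = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_servers))
        # Continuous actor: Beta policy over the offloading ratio (support (0, 1)).
        self.continuous_actor = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2), nn.Softplus())
        # Critic: scalar state-value estimate V(s).
        self.critic = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def act(self, state):
        server_dist = torch.distributions.Categorical(logits=self.discrete_actor(state))
        alpha, beta = (self.continuous_actor(state) + 1.0).unbind(-1)
        ratio_dist = torch.distributions.Beta(alpha, beta)
        server, ratio = server_dist.sample(), ratio_dist.sample()
        # Joint log-probability of the hybrid action (server choice, offloading ratio).
        log_prob = server_dist.log_prob(server) + ratio_dist.log_prob(ratio)
        return server, ratio, log_prob

    def value(self, state):
        return self.critic(state).squeeze(-1)


def a2c_update(agent, optimizer, state, next_state, reward, log_prob, gamma=0.99):
    """One-step advantage actor-critic update shared by both actor heads."""
    with torch.no_grad():
        target = reward + gamma * agent.value(next_state)   # bootstrapped return
    advantage = target - agent.value(state)                 # advantage estimate
    actor_loss = -(log_prob * advantage.detach()).mean()    # same advantage for both heads
    critic_loss = advantage.pow(2).mean()                   # value regression
    optimizer.zero_grad()
    (actor_loss + 0.5 * critic_loss).backward()
    optimizer.step()

In a training loop, the state would typically encode task sizes and channel conditions, and the reward would be the negative of a weighted sum of computation delay and energy consumption, so that maximizing return corresponds to the minimization objective stated in the abstract.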
Pages: 434-442
Number of pages: 9