Deep Reinforcement Learning for Multi-Hop Offloading in UAV-Assisted Edge Computing

Cited by: 7
Authors
Nguyen Tien Hoa [1 ]
Do Van Dai [1 ]
Le Hoang Lan [1 ]
Nguyen Cong Luong [2 ]
Duc Van Le [3 ]
Niyato, Dusit [3 ]
Affiliations
[1] Hanoi Univ Sci & Technol, Sch Elect & Elect Engn, Hanoi 100000, Vietnam
[2] Phenikaa Univ, Fac Comp Sci, Hanoi 12116, Vietnam
[3] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
Funding
National Research Foundation of Singapore
Keywords
Deep reinforcement learning; edge computing; multi-hop; offloading; UAV; resource allocation; trajectory optimization; joint resource
DOI
10.1109/TVT.2023.3292815
Chinese Library Classification
TM (electrical engineering); TN (electronics and communication technology)
Subject classification codes
0808; 0809
Abstract
In this article, we propose an unmanned aerial vehicle (UAV)-assisted multi-hop edge computing (UAV-assisted MEC) system in which a user equipment (UE) can offload its task to multiple UAVs in a multi-hop fashion. In particular, the UE offloads a task to its nearby UAV, and this UAV can execute a part of the received task and offload the remaining part to its neighboring UAV. The offloading process continues until the task execution is finished. The benefit of this multi-hop offloading is that the task execution can be finished faster, and the computing load can be shared among multiple UAVs, thus avoiding overloading and congestion. Each node, i.e., the UE or a UAV, needs to determine the task size to offload so as to minimize the cumulative energy consumption and latency over the nodes. We formulate a stochastic optimization problem under the dynamics and uncertainty of the UAV-assisted MEC system. Then, we propose a deep reinforcement learning (DRL) algorithm to solve this problem. Simulation results are provided to demonstrate the effectiveness of the DRL algorithm.
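As a rough illustration of the per-node offloading decision the abstract describes, the sketch below casts a three-node chain (a UE plus two UAVs) as a tiny tabular Q-learning problem: the state is (node, remaining task units), and each node's action is how many units to execute locally before forwarding the rest to the next hop, with a step cost that sums energy and latency. All rates, energy coefficients, and the equal energy/latency weighting are hypothetical placeholders, and the paper itself uses a deep RL algorithm on a stochastic system rather than this deterministic tabular simplification.

```python
import random
from functools import lru_cache

# Toy chain parameters (hypothetical, for illustration only)
F = [1.0, 3.0, 6.0]   # compute rate at UE, UAV 1, UAV 2 (units/s)
E = [2.0, 1.0, 0.8]   # compute energy per task unit
R = [4.0, 4.0]        # link rate to the next hop (units/s)
ET = [0.5, 0.5]       # transmit energy per task unit
T = 6                 # total task size in discrete units
N = len(F)

def step_cost(i, m, k):
    """Energy + latency when node i executes k of m remaining units
    and forwards the rest to node i + 1."""
    cost = k / F[i] + E[i] * k               # local compute latency + energy
    if m - k > 0:                            # transmission of the remainder
        cost += (m - k) / R[i] + ET[i] * (m - k)
    return cost

@lru_cache(maxsize=None)
def optimal(i, m):
    """Brute-force minimum cumulative cost from node i with m units left."""
    if m == 0:
        return 0.0
    if i == N - 1:                           # last hop must finish the task
        return step_cost(i, m, m)
    return min(step_cost(i, m, k) + optimal(i + 1, m - k) for k in range(m + 1))

def q_learning(episodes=20000, lr=0.5, eps=0.3, seed=0):
    """Tabular Q-learning on the (node, remaining) chain; costs are minimized,
    so unvisited actions (default 0.0) act as optimistic initialization."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        i, m = 0, T
        while m > 0:
            acts = [m] if i == N - 1 else list(range(m + 1))
            if rng.random() < eps:
                k = rng.choice(acts)
            else:
                k = min(acts, key=lambda a: Q.get((i, m, a), 0.0))
            nxt = 0.0
            if m - k > 0:                    # cost-to-go of the successor state
                succ = [m - k] if i + 1 == N - 1 else range(m - k + 1)
                nxt = min(Q.get((i + 1, m - k, a), 0.0) for a in succ)
            tgt = step_cost(i, m, k) + nxt   # undiscounted Bellman target
            Q[(i, m, k)] = (1 - lr) * Q.get((i, m, k), 0.0) + lr * tgt
            i, m = i + 1, m - k
    return Q

def rollout(Q):
    """Total cost of the greedy policy induced by Q."""
    i, m, total = 0, T, 0.0
    while m > 0:
        acts = [m] if i == N - 1 else list(range(m + 1))
        k = min(acts, key=lambda a: Q.get((i, m, a), 0.0))
        total += step_cost(i, m, k)
        i, m = i + 1, m - k
    return total

if __name__ == "__main__":
    Q = q_learning()
    print("learned cost:", rollout(Q), "optimal cost:", optimal(0, T))
```

With these toy numbers the UE's compute is expensive, so the learned policy forwards the whole task and lets the first UAV execute it; the point of the sketch is only that each node's "how much to keep" choice is an RL action whose cumulative cost the agent minimizes.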
Pages: 16917-16922
Page count: 6