Task Offloading and Trajectory Optimization in UAV Networks: A Deep Reinforcement Learning Method Based on SAC and A-Star

Cited by: 0
Authors
Liu, Jianhua [1 ]
Xie, Peng [1 ]
Liu, Jiajia [1 ]
Tu, Xiaoguang [1 ]
Affiliations
[1] Institute of Electronics and Electrical Engineering, Civil Aviation Flight University of China, Deyang 618307, China
Funding
China Postdoctoral Science Foundation
Keywords
A-star - Actor critic - Aerial vehicle - Communications security - Edge computing - Energy-consumption - Soft actor-critic - Task offloading - Trajectory optimization - Unmanned aerial vehicle;
DOI
10.32604/cmes.2024.054002
Abstract
In mobile edge computing, unmanned aerial vehicles (UAVs) equipped with computing servers have emerged as a promising solution due to their exceptional attributes of high mobility, flexibility, rapid deployment, and terrain agnosticism. These attributes enable UAVs to reach designated areas swiftly, thereby addressing temporary computing demands in scenarios where ground-based servers are overloaded or unavailable. However, the inherent broadcast nature of the line-of-sight transmission methods employed by UAVs renders them vulnerable to eavesdropping attacks. Meanwhile, real UAV operating areas often contain obstacles that affect flight safety, and collisions between UAVs may also occur. To solve these problems, we propose an innovative A*SAC deep reinforcement learning algorithm, which seamlessly integrates the benefits of the Soft Actor-Critic (SAC) and A* (A-Star) algorithms. The algorithm jointly optimizes the hovering position and task offloading proportion of the UAV through a task offloading function. Furthermore, it incorporates a path-planning function that identifies the most energy-efficient route for the UAV to reach its optimal hovering point. This approach not only reduces the flight energy consumption of the UAV but also lowers overall energy consumption, thereby optimizing system-level energy efficiency. Extensive simulation results demonstrate that, compared to other algorithms, our approach achieves superior system benefits: an average improvement of 13.18% across different computing task sizes, 25.61% across different levels of electromagnetic interference power emitted by the auxiliary UAVs, and 35.78% across different maximum computing frequencies of the auxiliary UAVs.
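The joint optimization described above treats the UAV's hovering position and its task offloading proportion as one continuous action vector, which is the natural fit for SAC's continuous action space. As a minimal illustrative sketch (not the paper's implementation; the service-area size and the linear rescaling are assumptions introduced here), a SAC-style action in [-1, 1]^3 could be decoded into a decision as follows:

```python
def decode_action(action, area_size=100.0):
    """Decode a SAC action vector in [-1, 1]^3 into a hovering position
    (x, y) inside a square service area and a task offloading proportion.

    area_size and the linear rescaling are illustrative assumptions,
    not parameters taken from the paper.
    """
    ax, ay, ar = action
    x = (ax + 1.0) / 2.0 * area_size   # hovering x-coordinate in [0, area_size]
    y = (ay + 1.0) / 2.0 * area_size   # hovering y-coordinate in [0, area_size]
    rho = (ar + 1.0) / 2.0             # fraction of the task offloaded, in [0, 1]
    return x, y, rho
```

A reward combining task latency and energy consumption, evaluated at the decoded (x, y, rho), would then drive the SAC update in the usual way.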
As for path planning, the simulation results indicate that our algorithm is capable of determining an optimal collision-avoidance path for each auxiliary UAV, enabling them to safely reach their designated endpoints in diverse obstacle-ridden environments. © 2024 The Authors.
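The collision-avoidance path planning rests on A* search over a discretized map with obstacle cells. A self-contained sketch of that building block, assuming a 4-connected grid with a Manhattan-distance heuristic (the grid model and heuristic are illustrative choices, not details taken from the paper):

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 4-connected grid.

    grid: list of rows, 0 = free cell, 1 = obstacle.
    Returns the list of cells from start to goal, or None if unreachable.
    Illustrative sketch: Manhattan distance is an admissible heuristic here.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]   # (f = g + h, g, cell, path)
    best_g = {start: 0}
    while open_heap:
        _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):   # found a cheaper route
                    best_g[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None
```

Replacing the unit step cost with a per-cell flight-energy cost would steer the search toward the energy-efficient routes the abstract describes.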
Pages: 1243-1273
Related Papers
50 items in total
  • [41] Deep Reinforcement Learning for Task Offloading and Power Allocation in UAV-Assisted MEC System
    Zhao, Nan
    Ren, Fan
    Du, Wei
    Ye, Zhiyang
    INTERNATIONAL JOURNAL OF MOBILE COMPUTING AND MULTIMEDIA COMMUNICATIONS, 2021, 12 (04) : 32 - 51
  • [42] Deep Reinforcement Learning for Scheduling and Offloading in UAV-Assisted Mobile Edge Networks
    Tian X.
    Miao P.
    Zhang L.
    Wireless Communications and Mobile Computing, 2023, 2023
  • [43] Parked Vehicles Assisted Task Offloading Based on Deep Reinforcement Learning
    Zeng, Feng
    CEUR Workshop Proceedings, CEUR-WS (3748)
  • [44] UAV Trajectory Optimization for Directional THz Links Using Deep Reinforcement Learning
    Dabiri, Mohammad Taghi
    Hasna, Mazen
    2023 IEEE 97TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-SPRING, 2023,
  • [45] Reentry trajectory optimization based on Deep Reinforcement Learning
    Gao, Jiashi
    Shi, Xinming
    Cheng, Zhongtao
    Xiong, Jizhang
    Liu, Lei
    Wang, Yongji
    Yang, Ye
    PROCEEDINGS OF THE 2019 31ST CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2019), 2019, : 2588 - 2592
  • [46] Deep reinforcement learning-based joint task and energy offloading in UAV-aided 6G intelligent edge networks
    Cheng, Zhipeng
    Liwang, Minghui
    Chen, Ning
    Huang, Lianfen
    Du, Xiaojiang
    Guizani, Mohsen
    COMPUTER COMMUNICATIONS, 2022, 192 : 234 - 244
  • [47] Trajectory Design and Generalization for UAV Enabled Networks:A Deep Reinforcement Learning Approach
    Li, Xuan
    Wang, Qiang
    Liu, Jie
    Zhang, Wenqi
    2020 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2020,
  • [48] Deep Reinforcement Learning for Real-Time Trajectory Planning in UAV Networks
    Li, Kai
    Ni, Wei
    Tovar, Eduardo
    Guizani, Mohsen
    2020 16TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE, IWCMC, 2020, : 958 - 963
  • [49] Computation Offloading Based on Deep Reinforcement Learning for UAV-MEC Network
    Wan, Zheng
    Luo, Yuxuan
    Dong, Xiaogang
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2023, PT IV, 2024, 14490 : 265 - 276
  • [50] Deep reinforcement learning based trajectory design and resource allocation for task-aware multi-UAV enabled MEC networks
    Li, Zewu
    Xu, Chen
    Zhang, Zhanpeng
    Wu, Runze
    COMPUTER COMMUNICATIONS, 2024, 213 : 88 - 98