Task Offloading and Trajectory Optimization in UAV Networks: A Deep Reinforcement Learning Method Based on SAC and A-Star

Cited by: 0
Authors
Liu, Jianhua [1 ]
Xie, Peng [1 ]
Liu, Jiajia [1 ]
Tu, Xiaoguang [1 ]
Affiliations
[1] Institute of Electronics and Electrical Engineering, Civil Aviation Flight University of China, Deyang 618307, China
Funding
China Postdoctoral Science Foundation
Keywords
A-star; Actor-critic; Aerial vehicle; Communications security; Edge computing; Energy consumption; Soft actor-critic; Task offloading; Trajectory optimization; Unmanned aerial vehicle
DOI
10.32604/cmes.2024.054002
Abstract
In mobile edge computing, unmanned aerial vehicles (UAVs) equipped with computing servers have emerged as a promising solution due to their exceptional attributes of high mobility, flexibility, rapid deployment, and terrain agnosticism. These attributes enable UAVs to reach designated areas and swiftly meet temporary computing demands in scenarios where ground-based servers are overloaded or unavailable. However, the inherent broadcast nature of the line-of-sight transmission methods employed by UAVs renders them vulnerable to eavesdropping attacks. Meanwhile, real UAV operation areas often contain obstacles that affect flight safety, and collisions between UAVs may also occur. To solve these problems, we propose an innovative A*SAC deep reinforcement learning algorithm, which seamlessly integrates the benefits of the Soft Actor-Critic (SAC) and A* (A-Star) algorithms. This algorithm jointly optimizes the hovering position and task offloading proportion of the UAV through a task offloading function. Furthermore, our algorithm incorporates a path-planning function that identifies the most energy-efficient route for the UAV to reach its optimal hovering point. This approach not only reduces the flight energy consumption of the UAV but also lowers overall energy consumption, thereby optimizing system-level energy efficiency. Extensive simulation results demonstrate that, compared to other algorithms, our approach achieves superior system benefits: an average improvement of 13.18% across different computing task sizes, 25.61% on average across different powers of the electromagnetic interference that auxiliary UAVs direct at the UAV, and 35.78% on average across different maximum computing frequencies of the auxiliary UAVs. As for path planning, the simulation results indicate that our algorithm determines an optimal collision-avoidance path for each auxiliary UAV, enabling each to safely reach its designated endpoint in diverse obstacle-ridden environments. © 2024 The Authors.
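The path-planning function described in the abstract is built on A* search over an obstacle map. The listing below is a minimal illustrative sketch of that idea, not the authors' implementation: the 2D grid, the uniform per-move cost standing in for flight energy, and the function name a_star are assumptions made purely for demonstration.

# Illustrative A* grid search for an obstacle-avoiding, energy-efficient UAV path.
# Sketch only: the grid, the unit step cost (a stand-in for per-move flight
# energy), and all names here are assumptions, not the paper's implementation.
import heapq

def a_star(grid, start, goal):
    """Return the lowest-cost path from start to goal on a 2D grid.

    grid[r][c] == 1 marks an obstacle; each move costs one (assumed) unit of
    flight energy, so the cheapest path is also the shortest collision-free one
    under this simplification.
    """
    rows, cols = len(grid), len(grid[0])

    def heuristic(a, b):
        # Manhattan distance: admissible for 4-connected moves with unit cost.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_heap = [(heuristic(start, goal), 0, start)]  # entries: (f = g + h, g, node)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            # Reconstruct the path by walking parent links backwards.
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry; a cheaper route to this cell was found
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1  # assumed uniform energy cost per grid move
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = node
                    heapq.heappush(open_heap, (ng + heuristic((nr, nc), goal), ng, (nr, nc)))
    return None  # no collision-free path exists

if __name__ == "__main__":
    # Toy 5x5 area: 0 = free airspace, 1 = obstacle; start at the top-left,
    # hover point (goal) at the bottom-right.
    area = [[0, 0, 0, 1, 0],
            [1, 1, 0, 1, 0],
            [0, 0, 0, 0, 0],
            [0, 1, 1, 1, 0],
            [0, 0, 0, 0, 0]]
    print(a_star(area, (0, 0), (4, 4)))

In the paper, this kind of search would supply the collision-free route to the hovering point that SAC selects, while SAC itself handles the continuous decisions (hovering position and offloading proportion); the uniform step cost above is only a placeholder for the flight-energy model.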
Pages: 1243-1273