Deep reinforcement learning in NOMA-assisted UAV networks for path selection and resource offloading

Cited: 5
Authors
Yang, Xincheng [1 ,2 ]
Qin, Danyang [1 ,2 ]
Liu, Jiping [1 ,2 ]
Li, Yue [1 ]
Zhu, Yong [1 ]
Ma, Lin [3 ]
Affiliations
[1] Heilongjiang Univ, Sch Elect Engn, Harbin 150080, Peoples R China
[2] Southeast Univ, Natl Mobile Commun Res Lab, Nanjing, Peoples R China
[3] Harbin Inst Technol, Sch Elect & Informat Engn, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep Q-network (DQN); Non-orthogonal multiple access (NOMA); Unmanned aerial vehicle (UAV); Artificial potential field (APF); Path selection; Resource offloading; 5G;
DOI
10.1016/j.adhoc.2023.103285
CLC Classification Number
TP [Automation technology, computer technology];
Discipline Code
0812;
Abstract
This paper constructs a NOMA-based UAV-assisted Cellular Offloading (UACO) framework and designs a UAV path selection and resource offloading algorithm (UPRA) based on deep reinforcement learning. The work focuses on the coupling between path selection and resource offloading as the UAVs move, and formulates a joint optimization problem over three-dimensional UAV path design and resource offloading that accounts for autonomous obstacle avoidance in complex environments and for the influence of 3D obstacles on the channel model. In particular, a constrained clustering-assignment algorithm is designed by improving the K-means algorithm and combining it with an assignment algorithm, so that randomly moving users are periodically re-clustered and UAV tasks are assigned accordingly. In addition, a semi-fixed hierarchical power allocation algorithm is embedded in the designed DQN algorithm to improve its convergence. Simulation results show that the NOMA-based design improves the spectrum utilization efficiency and communication throughput of the UAV network. Compared with the artificial potential field method, the proposed algorithm avoids becoming trapped in suboptimal path solutions and achieves higher communication throughput. The paper also examines how the reward function affects training convergence and the resulting performance in deep reinforcement learning. Finally, the algorithm's adaptability to dynamic networks and complex environments is demonstrated by randomly deploying users and varying the maximum user movement speed.
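The abstract describes the clustering-assignment step only at a high level. Below is a minimal sketch of the general idea, assuming plain K-means for the periodic user clustering and the Hungarian method for the UAV-to-cluster assignment; the function name cluster_and_assign, the 2D position arrays, and the distance-based cost are illustrative assumptions and do not reproduce the paper's constrained K-means variant.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def cluster_and_assign(user_xy, uav_xy, num_uavs, seed=0):
    """Periodically re-cluster moving ground users and match each UAV to one cluster.

    user_xy: (N, 2) user positions; uav_xy: (M, 2) current UAV positions, M == num_uavs.
    Plain K-means stands in for the paper's constrained clustering step.
    """
    # Step 1: cluster users into as many groups as there are UAVs.
    km = KMeans(n_clusters=num_uavs, n_init=10, random_state=seed).fit(user_xy)
    centers = km.cluster_centers_

    # Step 2: assignment step -- one-to-one matching of UAVs to cluster centres
    # that minimises the total UAV-to-cluster distance (Hungarian algorithm).
    cost = np.linalg.norm(uav_xy[:, None, :] - centers[None, :, :], axis=2)
    uav_idx, cluster_idx = linear_sum_assignment(cost)
    return km.labels_, centers, dict(zip(uav_idx, cluster_idx))


# Illustrative call: 60 randomly moving users served by 3 UAVs.
rng = np.random.default_rng(1)
labels, centers, assignment = cluster_and_assign(
    rng.uniform(0, 500, size=(60, 2)), rng.uniform(0, 500, size=(3, 2)), num_uavs=3)
```

In the framework described by the abstract, such a step would be re-run each clustering period as the users move, with the resulting UAV-to-cluster mapping feeding the path selection and offloading decisions.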
Pages: 14
Related Papers
50 records in total
  • [31] Intelligent Offloading for NOMA-Assisted MEC via Dual Connectivity
    Li, Changxiang
    Wang, Hong
    Song, Rongfang
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (04) : 2802 - 2813
  • [32] Deep-Learning-Based Resource Allocation for 6G NOMA-Assisted Backscatter Communications
    Tuong, Van Dat
    Cho, Sungrae
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (19) : 32234 - 32243
  • [33] Deep Reinforcement Learning Based Computation Offloading in UAV-Assisted Edge Computing
    Zhang, Peiying
    Su, Yu
    Li, Boxiao
    Liu, Lei
    Wang, Cong
    Zhang, Wei
    Tan, Lizhuang
    DRONES, 2023, 7 (03)
  • [34] Joint Resource Allocation and Reliability Maximization in NOMA-Assisted Cooperative URLLC Networks
    Yuan, Xiaopeng
    Li, Boyao
    Zhu, Yao
    Hu, Yulin
    Schmeink, Anke
    2024 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC 2024, 2024
  • [35] Deep Reinforcement Learning-Based Resource Allocation in Cooperative UAV-Assisted Wireless Networks
    Luong, Phuong
    Gagnon, Francois
    Tran, Le-Nam
    Labeau, Fabrice
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2021, 20 (11) : 7610 - 7625
  • [36] Throughput Maximization in NOMA Enhanced RIS-Assisted Multi-UAV Networks: A Deep Reinforcement Learning Approach
    Tang, Runzhi
    Wang, Junxuan
    Zhang, Yanyan
    Jiang, Fan
    Zhang, Xuewei
    Du, Jianbo
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2025, 74 (01) : 730 - 745
  • [37] Deep Reinforcement Learning for Task Offloading in UAV-Aided Smart Farm Networks
    Nguyen, Anne Catherine
    Pamuklu, Turgay
    Syed, Aisha
    Kennedy, W. Sean
    Erol-Kantarci, Melike
    2022 IEEE FUTURE NETWORKS WORLD FORUM, FNWF, 2022, : 270 - 275
  • [38] Deep Reinforcement Learning for Offloading and Resource Allocation in Vehicle Edge Computing and Networks
    Liu, Yi
    Yu, Huimin
    Xie, Shengli
    Zhang, Yan
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2019, 68 (11) : 11158 - 11168
  • [39] Asynchronous Federated Deep-Reinforcement-Learning-Based Dependency Task Offloading for UAV-Assisted Vehicular Networks
    Shen, Si
    Shen, Guojiang
    Dai, Zhehao
    Zhang, Kaiyu
    Kong, Xiangjie
    Li, Jianxin
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (19) : 31561 - 31574
  • [40] AI Empowered RIS-Assisted NOMA Networks: Deep Learning or Reinforcement Learning?
    Zhong, Ruikang
    Liu, Yuanwei
    Mu, Xidong
    Chen, Yue
    Song, Lingyang
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2022, 40 (01) : 182 - 196