Multi-Agent Deep Reinforcement Learning Based Optimizing Joint 3D Trajectories and Phase Shifts in RIS-Assisted UAV-Enabled Wireless Communications

Cited by: 0
Authors
Tesfaw, Belayneh Abebe [1]
Juang, Rong-Terng [2]
Lin, Hsin-Piao [1]
Tarekegn, Getaneh Berie [3]
Kabore, Wendenda Nathanael [4]
Affiliations
[1] National Taipei University of Technology, Department of Electrical Engineering and Computer Science, Taipei 10608, Taiwan
[2] National Taipei University of Technology, Institute of Space and System Engineering, Taipei 10608, Taiwan
[3] NYCU, Department of Electrical and Computer Engineering, Hsinchu 30010, Taiwan
[4] National Taipei University of Technology, Department of Electronic Engineering, Taipei 10608, Taiwan
Keywords
Deep reinforcement learning
DOI
10.1109/OJVT.2024.3486197
Abstract
Unmanned aerial vehicles (UAVs) can serve as airborne access points or base stations, delivering network services to Internet of Things devices (IoTDs) in areas with compromised or absent infrastructure. However, urban obstacles such as trees and tall buildings can block the links between UAVs and IoTDs, degrading communication performance, and flying at high altitudes can introduce significant path loss. To address these challenges, this paper deploys reconfigurable intelligent surfaces (RISs) that smartly reflect signals to improve communication quality, and it proposes a method that jointly optimizes the 3D trajectory of the UAV and the phase shifts of the RIS to maximize communication coverage and ensure satisfactory average achievable data rates in RIS-assisted UAV-enabled wireless communications under mobile multi-user scenarios. A multi-agent double-deep Q-network (MADDQN) algorithm is presented, in which each agent dynamically adjusts either the UAV's position or the RIS's phase shifts; the agents learn to collaborate by sharing the same reward toward a common goal. Simulation results demonstrate that the proposed method significantly outperforms baseline strategies in improving communication coverage and average achievable data rates, achieving a communication coverage score of 98.6% while guaranteeing acceptable achievable data rates for the IoTDs.
Pages: 1712 - 1726
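As a rough illustration of the MADDQN structure described in the abstract, the following Python sketch pairs two double-deep Q-network agents, one choosing discrete 3D UAV movements and one choosing discrete RIS phase-shift adjustments, trained on a single shared reward. This is a minimal sketch under assumptions: the state dimension, action discretizations, network sizes, and the stand-in environment and reward are illustrative choices, not the authors' settings.

# Minimal sketch of a multi-agent double-deep Q-network (MADDQN) setup:
# one agent picks discrete 3D UAV moves, another picks discrete RIS
# phase-shift patterns, and both are trained on the same (shared) reward.
# State/action sizes and the toy environment are assumptions for illustration.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNet(nn.Module):
    """Small fully connected Q-network mapping a state to per-action values."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class DDQNAgent:
    """Double-DQN agent: the online net selects actions, the target net evaluates them."""
    def __init__(self, state_dim, n_actions, gamma=0.99, lr=1e-3):
        self.online = QNet(state_dim, n_actions)
        self.target = QNet(state_dim, n_actions)
        self.target.load_state_dict(self.online.state_dict())
        self.opt = optim.Adam(self.online.parameters(), lr=lr)
        self.buffer = deque(maxlen=50_000)
        self.gamma, self.n_actions = gamma, n_actions

    def act(self, state, eps):
        if random.random() < eps:                      # epsilon-greedy exploration
            return random.randrange(self.n_actions)
        with torch.no_grad():
            return int(self.online(torch.as_tensor(state).float()).argmax())

    def learn(self, batch_size=64):
        if len(self.buffer) < batch_size:
            return
        s, a, r, s2 = zip(*random.sample(self.buffer, batch_size))
        s = torch.as_tensor(s).float()
        a = torch.as_tensor(a).long().unsqueeze(1)
        r = torch.as_tensor(r).float()
        s2 = torch.as_tensor(s2).float()
        q = self.online(s).gather(1, a).squeeze(1)
        with torch.no_grad():                          # double-DQN target
            best = self.online(s2).argmax(1, keepdim=True)
            target = r + self.gamma * self.target(s2).gather(1, best).squeeze(1)
        loss = nn.functional.mse_loss(q, target)
        self.opt.zero_grad(); loss.backward(); self.opt.step()


# Two cooperating agents: 7 UAV moves (+/-x, +/-y, +/-z, hover) and 8 discrete
# RIS phase-shift patterns; both action spaces are illustrative assumptions.
uav_agent, ris_agent = DDQNAgent(6, 7), DDQNAgent(6, 8)
state = [0.0] * 6                                      # placeholder environment state
for step in range(100):
    a_uav = uav_agent.act(state, eps=0.1)
    a_ris = ris_agent.act(state, eps=0.1)
    next_state = [random.random() for _ in range(6)]   # stand-in for the channel model
    shared_reward = random.random()                    # stand-in coverage/data-rate reward, shared by both agents
    uav_agent.buffer.append((state, a_uav, shared_reward, next_state))
    ris_agent.buffer.append((state, a_ris, shared_reward, next_state))
    uav_agent.learn(); ris_agent.learn()
    if step % 50 == 0:                                 # periodic target-network sync
        uav_agent.target.load_state_dict(uav_agent.online.state_dict())
        ris_agent.target.load_state_dict(ris_agent.online.state_dict())
    state = next_state

Sharing one reward, as described in the abstract, is what makes the two independently trained agents cooperate toward the common coverage and data-rate objective rather than optimizing separate criteria.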