Traffic Navigation for Urban Air Mobility with Reinforcement Learning

Cited by: 0
Authors
Lee, Jaeho [1 ]
Lee, Hohyeong [1 ]
Noh, Junyoung [1 ]
Bang, Hyochoong [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
Source
PROCEEDINGS OF THE 2021 ASIA-PACIFIC INTERNATIONAL SYMPOSIUM ON AEROSPACE TECHNOLOGY (APISAT 2021), VOL 2 | 2023 / Vol. 913
Keywords
Deep reinforcement learning; Multi-agent system; Traffic network; Urban Air Mobility (UAM); Proximal Policy Optimization (PPO); Soft Actor-Critic (SAC);
DOI
10.1007/978-981-19-2635-8_3
CLC Classification
V [Aviation, Aerospace];
Subject Classification Codes
08 ; 0825 ;
Abstract
Assuring stability of the guidance law for quadrotor-type Urban Air Mobility (UAM) vehicles is important since they are assumed to operate in urban areas. Model-free reinforcement learning has been intensively applied for this purpose in recent studies, and in reinforcement learning the training environment is a critical component. Proximal Policy Optimization (PPO) is the algorithm most widely used for reinforcement learning of quadrotors; however, PPO tends to fail to guarantee the stability of the guidance law as the search space of the environment grows. In this work, we show improved stability in a multi-agent quadrotor-type UAM environment by applying the Soft Actor-Critic (SAC) reinforcement learning algorithm. The simulations were performed in Unity. Our approach achieved three times higher reward in the UAM environment than training with the PPO algorithm, and it also trained faster than PPO.
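The abstract contrasts PPO's clipped policy updates with SAC's entropy-regularized value target, but the paper's code and hyperparameters are not given here. The following is a minimal, self-contained sketch of the two core update rules being compared; the function names and default values (eps=0.2, gamma=0.99, alpha=0.2) are illustrative assumptions, not taken from the paper.

```python
def ppo_clipped_loss(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective for a single sample.

    ratio = pi_new(a|s) / pi_old(a|s). Clipping the ratio to
    [1 - eps, 1 + eps] limits how far each update can move the
    policy; the min() takes the pessimistic bound (to be maximized).
    """
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps) * advantage
    return min(unclipped, clipped)


def sac_soft_target(reward, q1_next, q2_next, log_prob_next,
                    gamma=0.99, alpha=0.2):
    """SAC soft Bellman target for a single transition.

    The entropy bonus -alpha * log pi rewards stochastic policies,
    which encourages exploration as the search space grows; the
    min over twin Q-estimates curbs overestimation bias.
    """
    soft_q = min(q1_next, q2_next) - alpha * log_prob_next
    return reward + gamma * soft_q
```

For example, with ratio 2.0 and advantage 1.0, the PPO loss is clipped to 1.2 rather than 2.0, capping the policy step; the SAC target simply adds the discounted, entropy-adjusted minimum of the two next-state Q-values to the reward.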
Pages: 31 - 42 (12 pages)
Related Papers
50 items total
  • [11] EcoMRL: Deep reinforcement learning-based traffic signal control for urban air quality
    Jung, Jaeeun
    Kim, Inhi
    Yoon, Jinwon
    INTERNATIONAL JOURNAL OF SUSTAINABLE TRANSPORTATION, 2024,
  • [12] Multi-Agent Deep Reinforcement Learning for Efficient Passenger Delivery in Urban Air Mobility
    Park, Chanyoung
    Park, Soohyun
    Kim, Gyu Seon
    Jung, Soyi
    Kim, Jae-Hyun
    Kim, Joongheon
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 5689 - 5694
  • [13] Dynamic Spectrum Sharing in Cellular Based Urban Air Mobility via Deep Reinforcement Learning
    Han, Ruixuan
    Li, Hongxiang
    Knoblock, Eric J.
    Gasper, Michael R.
    Apaza, Rafael D.
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 1332 - 1337
  • [14] Prescribing optimal health-aware operation for urban air mobility with deep reinforcement learning
    Montazeri, Mina
    Kulkarni, Chetan S.
    Fink, Olga
    RELIABILITY ENGINEERING & SYSTEM SAFETY, 2025, 259
  • [15] Intelligent Spectrum and Airspace Resource Management for Urban Air Mobility Using Deep Reinforcement Learning
    Apaza, Rafael D.
    Han, Ruixuan
    Li, Hongxiang
    Knoblock, Eric J.
    IEEE ACCESS, 2024, 12 : 164750 - 164766
  • [16] Learning-to-Fly RL: Reinforcement Learning-based Collision Avoidance for Scalable Urban Air Mobility
    Jang, Kuk
    Pant, Yash Vardhan
    Rodionova, Alena
    Mangharam, Rahul
    2020 AIAA/IEEE 39TH DIGITAL AVIONICS SYSTEMS CONFERENCE (DASC) PROCEEDINGS, 2020,
  • [17] Deductive Reinforcement Learning for Visual Autonomous Urban Driving Navigation
    Huang, Changxin
    Zhang, Ronghui
    Ouyang, Meizi
    Wei, Pengxu
    Lin, Junfan
    Su, Jiang
    Lin, Liang
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (12) : 5379 - 5391
  • [18] Minimum-Violation Traffic Management for Urban Air Mobility
    Bharadwaj, Suda
    Wongpiromsarn, Tichakorn
    Neogi, Natasha
    Muffoletto, Joseph
    Topcu, Ufuk
    NASA FORMAL METHODS (NFM 2021), 2021, 12673 : 37 - 52
  • [19] Autonomous Conflict Resolution in Urban Air Mobility: A Deep Multi-Agent Reinforcement Learning Approach
    Deniz, Sabrullah
    Wang, Zhenbo
    AIAA AVIATION FORUM AND ASCEND 2024, 2024,
  • [20] Combined MPC and reinforcement learning for traffic signal control in urban traffic networks
    Remmerswaall, Willemijn
    Sun, Dingshan
    Jamshidnejad, Anahita
    De Schutter, Bart
    2022 26TH INTERNATIONAL CONFERENCE ON SYSTEM THEORY, CONTROL AND COMPUTING (ICSTCC), 2022, : 432 - 439