Traffic Navigation for Urban Air Mobility with Reinforcement Learning

Cited: 0
Authors
Lee, Jaeho [1 ]
Lee, Hohyeong [1 ]
Noh, Junyoung [1 ]
Bang, Hyochoong [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
Keywords
Deep reinforcement learning; Multi-agent system; Traffic network; Urban Air Mobility (UAM); Proximal Policy Optimization (PPO); Soft Actor-Critic (SAC);
DOI
10.1007/978-981-19-2635-8_3
CLC Classification Number
V [Aviation, Astronautics];
Discipline Codes
08; 0825;
Abstract
Assuring stability of the guidance law for quadrotor-type Urban Air Mobility (UAM) vehicles is important since they are assumed to operate in urban areas. Model-free reinforcement learning has been applied intensively to this purpose in recent studies, and in reinforcement learning the environment is an important part of training. Proximal Policy Optimization (PPO) is the algorithm most widely used for reinforcement learning of quadrotors. However, PPO tends to fail to guarantee the stability of the guidance law as the search space of the environment grows. In this work, we show improved stability in a multi-agent quadrotor-type UAM environment by applying the Soft Actor-Critic (SAC) reinforcement learning algorithm. The simulations were performed in Unity. Agents trained with SAC achieved three times the reward of agents trained with PPO in the Urban Air Mobility environment, and our approach also trains faster than PPO.
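The paper itself does not include code, but the core difference between SAC and PPO named in the abstract is SAC's entropy-regularized (soft) value target. As a minimal sketch under stated assumptions (NumPy arrays of per-transition rewards, done flags, twin target-critic values, and next-action log-probabilities; `gamma` and the temperature `alpha` are illustrative defaults, not values from the paper), the soft Bellman target used to train SAC's critics can be written as:

```python
import numpy as np

def soft_bellman_target(rewards, dones, q1_next, q2_next, log_prob_next,
                        gamma=0.99, alpha=0.2):
    """SAC critic target:
        y = r + gamma * (1 - done) * (min(Q1', Q2') - alpha * log pi(a'|s'))

    The min over twin target critics curbs overestimation, and the
    -alpha * log pi term rewards policy entropy, which is what
    distinguishes the soft target from a standard Bellman backup.
    """
    min_q = np.minimum(q1_next, q2_next)
    return rewards + gamma * (1.0 - dones) * (min_q - alpha * log_prob_next)

# Single-transition example: r=1, not done, Q'=(2, 3), log pi = -1
y = soft_bellman_target(np.array([1.0]), np.array([0.0]),
                        np.array([2.0]), np.array([3.0]),
                        np.array([-1.0]))
# y = 1 + 0.99 * (2.0 - 0.2 * (-1.0)) = 3.178
```

Each critic is then regressed toward `y`, while the actor maximizes `min(Q1, Q2) - alpha * log pi`; the entropy bonus encourages broader exploration, which is one plausible reason SAC copes better than PPO as the search space grows.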
Pages: 31-42
Page count: 12
Related Papers (50 total)
  • [1] Reinforcement Learning-Based Flow Management Techniques for Urban Air Mobility and Dense Low-Altitude Air Traffic Operations
    Xie, Yibing
    Gardi, Alessandro
    Sabatini, Roberto
    2021 IEEE/AIAA 40TH DIGITAL AVIONICS SYSTEMS CONFERENCE (DASC), 2021,
  • [2] Traffic Management for Urban Air Mobility
    Bharadwaj, Suda
    Carr, Steven
    Neogi, Natasha
    Poonawala, Hasan
    Chueca, Alejandro Barberia
    Topcu, Ufuk
    NASA FORMAL METHODS (NFM 2019), 2019, 11460 : 71 - 87
  • [3] Air Traffic Assignment for Intensive Urban Air Mobility Operations
    Wang, Zhengyi
    Delahaye, Daniel
    Farges, Jean-Loup
    Alam, Sameer
    JOURNAL OF AEROSPACE INFORMATION SYSTEMS, 2021, 18 (11): : 860 - 875
  • [4] Adapting air traffic control for drones and urban air mobility
    Thipphavong, David
    AEROSPACE AMERICA, 2019, 57 (11) : 32 - 32
  • [5] Deep Reinforcement Learning Assisted Spectrum Management in Cellular Based Urban Air Mobility
    Han, Ruixuan
    Li, Hongxiang
    Apaza, Rafael
    Knoblock, Eric
    Gasper, Michael
    IEEE WIRELESS COMMUNICATIONS, 2022, 29 (06) : 14 - 21
  • [6] An Integration visual navigation algorithm for urban air mobility
    Li, Yandong
    Jiang, Bo
    Zeng, Long
    Li, Chenglong
    BIG DATA RESEARCH, 2024, 36
  • [7] Fast Decision Support for Air Traffic Management at Urban Air Mobility Vertiports using Graph Learning
    KrisshnaKumar, Prajit
    Witter, Jhoel
    Paul, Steve
    Cho, Hanvit
    Dantu, Karthik
    Chowdhury, Souma
    2023 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, IROS, 2023, : 1580 - 1585
  • [8] A Traffic Demand Analysis Method for Urban Air Mobility
    Bulusu, Vishwanath
    Onat, Emin Burak
    Sengupta, Raja
    Yedavalli, Pavan
    Macfarlane, Jane
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22 (09) : 6039 - 6047
  • [9] Preliminary Concept of Urban Air Mobility Traffic Rules
    Qu, Wenqiu
    Xu, Chenchen
    Tan, Xiang
    Tang, Anqi
    He, Hongbo
    Liao, Xiaohan
    DRONES, 2023, 7 (01)
  • [10] Decentralized Control Synthesis for Air Traffic Management in Urban Air Mobility
    Bharadwaj, Suda
    Carr, Steven
    Neogi, Natasha
    Topcu, Ufuk
    IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS, 2021, 8 (02): : 598 - 608