Graph Meta-Reinforcement Learning for Transferable Autonomous Mobility-on-Demand

Cited by: 5
Authors
Gammelli, Daniele [1 ]
Yang, Kaidi [2 ,4 ]
Harrison, James [3 ]
Rodrigues, Filipe [1 ]
Pereira, Francisco [1 ]
Pavone, Marco [2 ]
Affiliations
[1] Tech Univ Denmark, Lyngby, Denmark
[2] Stanford Univ, Stanford, CA 94305 USA
[3] Google Res, Brain Team, San Francisco, CA USA
[4] Natl Univ Singapore, Singapore, Singapore
Funding
Swiss National Science Foundation; U.S. National Science Foundation;
Keywords
Autonomous Mobility-on-Demand; Meta-learning; Reinforcement learning; Graph Neural Networks;
DOI
10.1145/3534678.3539180
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Autonomous Mobility-on-Demand (AMoD) systems represent an attractive alternative to existing transportation paradigms, which are currently challenged by urbanization and increasing travel needs. By centrally controlling a fleet of self-driving vehicles, these systems provide mobility services to customers and are currently starting to be deployed in a number of cities around the world. Current learning-based approaches for controlling AMoD systems are limited to the single-city scenario, whereby the service operator is allowed to take an unlimited number of operational decisions within the same transportation system. However, real-world system operators can hardly afford to fully re-train AMoD controllers for every city they operate in, as this could result in a high number of poor-quality decisions during training, making the single-city strategy a potentially impractical solution. To address these limitations, we propose to formalize the multi-city AMoD problem through the lens of meta-reinforcement learning (meta-RL) and devise an actor-critic algorithm based on recurrent graph neural networks. In our approach, AMoD controllers are explicitly trained such that a small amount of experience within a new city will produce good system performance. Empirically, we show how control policies learned through meta-RL are able to achieve near-optimal performance on unseen cities by learning rapidly adaptable policies, thus making them more robust not only to novel environments, but also to distribution shifts common in real-world operations, such as special events, unexpected congestion, and dynamic pricing schemes.
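For illustration only, the sketch below (not the authors' released code) shows one way a recurrent graph-network actor-critic of the kind described in the abstract could be structured in PyTorch: per-region features are aggregated over the region graph, a GRU cell carries context across time steps, and the actor parameterizes a Dirichlet distribution over regions for rebalancing idle vehicles. All names (GraphGRUPolicy, n_regions, the feature layout) are illustrative assumptions and do not reproduce the paper's exact architecture or its meta-RL training loop across cities.

    # Minimal sketch of a recurrent graph-network actor-critic for AMoD
    # rebalancing; assumes a dense, row-normalized region adjacency matrix.
    import torch
    import torch.nn as nn

    class GraphGRUPolicy(nn.Module):
        def __init__(self, in_dim, hidden_dim=64):
            super().__init__()
            self.encoder = nn.Linear(in_dim, hidden_dim)   # per-region feature encoder
            self.gru = nn.GRUCell(hidden_dim, hidden_dim)  # recurrence carries context over time
            self.actor = nn.Linear(hidden_dim, 1)          # per-region concentration logit
            self.critic = nn.Linear(hidden_dim, 1)         # per-region value contribution

        def forward(self, x, adj, h):
            # x: (n_regions, in_dim) region features (e.g., demand, idle vehicles)
            # adj: (n_regions, n_regions) row-normalized region adjacency
            # h: (n_regions, hidden_dim) recurrent state from the previous step
            z = torch.relu(self.encoder(x))
            z = adj @ z                                    # one step of neighborhood aggregation
            h = self.gru(z, h)                             # update per-region recurrent state
            conc = torch.nn.functional.softplus(self.actor(h).squeeze(-1)) + 1e-3
            dist = torch.distributions.Dirichlet(conc)     # distribution over regions
            value = self.critic(h).sum()                   # scalar value estimate for the graph
            return dist, value, h

    # Usage: sample a rebalancing distribution over regions for one time step.
    n_regions, in_dim, hidden_dim = 10, 4, 64
    policy = GraphGRUPolicy(in_dim, hidden_dim)
    x = torch.rand(n_regions, in_dim)
    adj = torch.rand(n_regions, n_regions)
    adj = adj / adj.sum(dim=-1, keepdim=True)              # row-normalize adjacency
    h = torch.zeros(n_regions, hidden_dim)
    dist, value, h = policy(x, adj, h)
    action = dist.sample()                                 # fraction of idle fleet per region

A Dirichlet policy head is a natural fit here because a rebalancing action can be read as a fraction of the idle fleet to send to each region; how the recurrent state and graph aggregation are actually combined in the paper may differ from this sketch.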
Pages: 2913-2923
Page count: 11
Related Papers (50 in total)
  • [1] Gammelli, Daniele; Yang, Kaidi; Harrison, James; Rodrigues, Filipe; Pereira, Francisco C.; Pavone, Marco. Graph Neural Network Reinforcement Learning for Autonomous Mobility-on-Demand Systems. 2021 60th IEEE Conference on Decision and Control (CDC), 2021: 2996-3003.
  • [2] Gueriau, Maxime; Dusparic, Ivana. SAMoD: Shared Autonomous Mobility-on-Demand using Decentralized Reinforcement Learning. 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2018: 1558-1563.
  • [3] Lu, Ying; Liang, Yanchang; Ding, Zhaohao; Wu, Qiuwei; Ding, Tao; Lee, Wei-Jen. Deep Reinforcement Learning-Based Charging Pricing for Autonomous Mobility-on-Demand System. IEEE Transactions on Smart Grid, 2022, 13(2): 1412-1426.
  • [4] Salazar, Mauro; Lanzetti, Nicolas; Rossi, Federico; Schiffer, Maximilian; Pavone, Marco. Intermodal Autonomous Mobility-on-Demand. IEEE Transactions on Intelligent Transportation Systems, 2020, 21(9): 3946-3960.
  • [5] Wen, Jian; Zhao, Jinhua; Jaillet, Patrick. Rebalancing Shared Mobility-on-Demand Systems: A Reinforcement Learning Approach. 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017.
  • [6] Guo, Ge; Xu, Yangguang. A Deep Reinforcement Learning Approach to Ride-Sharing Vehicle Dispatching in Autonomous Mobility-on-Demand Systems. IEEE Intelligent Transportation Systems Magazine, 2022, 14(1): 128-140.
  • [7] Xiong, Luolin; Tang, Yang; Liu, Chensheng; Mao, Shuai; Meng, Ke; Dong, Zhaoyang; Qian, Feng. Meta-Reinforcement Learning-Based Transferable Scheduling Strategy for Energy Management. IEEE Transactions on Circuits and Systems I: Regular Papers, 2023, 70(4): 1685-1695.
  • [8] Wen, Jian; Nassir, Neema; Zhao, Jinhua. Value of demand information in autonomous mobility-on-demand systems. Transportation Research Part A: Policy and Practice, 2019, 121: 346-359.
  • [9] He, Sihong; Han, Shuo; Miao, Fei. Robust Electric Vehicle Balancing of Autonomous Mobility-on-Demand System: A Multi-Agent Reinforcement Learning Approach. 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023: 5471-5478.
  • [10] Zardini, Gioele; Lanzetti, Nicolas; Pavone, Marco; Frazzoli, Emilio. Analysis and Control of Autonomous Mobility-on-Demand Systems. Annual Review of Control, Robotics, and Autonomous Systems, 2022, 5: 633-658.