Model-Based Graph Reinforcement Learning for Inductive Traffic Signal Control

Cited by: 3
Authors
Devailly, Francois-Xavier [1 ]
Larocque, Denis [1 ]
Charlin, Laurent [1 ]
Affiliations
[1] HEC Montreal, Dept Decis Sci, Montreal, PQ H3T 2A7, Canada
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
Adaptive traffic signal control; transfer learning; multi-agent reinforcement learning; joint action modeling; model-based reinforcement learning; graph neural networks
DOI
10.1109/OJITS.2024.3376583
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We introduce MuJAM, an adaptive traffic signal control method that leverages model-based reinforcement learning to 1) extend recent generalization efforts (to road network architectures and traffic distributions) by additionally generalizing to controller constraints (cyclic and acyclic policies), 2) improve performance and data efficiency over related model-free approaches, and 3) enable explicit coordination at scale for the first time. In a zero-shot transfer setting involving road networks and traffic conditions never seen during training, and in a larger transfer experiment controlling 3,971 traffic signal controllers in Manhattan, we show that MuJAM, under both cyclic and acyclic constraints, outperforms domain-specific baselines as well as a recent transferable approach.
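To make the abstract's high-level description concrete, below is a minimal, illustrative sketch of the two ingredients it names: a graph neural network over the road network and a learned transition (world) model used for short-horizon planning of signal phases. This is not the authors' MuJAM implementation; the module names (GraphEncoder, TransitionModel, plan_one_step), feature dimensions, and the greedy per-intersection one-step planner are assumptions introduced here purely for illustration.

    import torch
    import torch.nn as nn

    class GraphEncoder(nn.Module):
        # One round of mean-aggregation message passing over the road-network graph.
        def __init__(self, in_dim, hid_dim):
            super().__init__()
            self.self_lin = nn.Linear(in_dim, hid_dim)
            self.nbr_lin = nn.Linear(in_dim, hid_dim)

        def forward(self, x, adj):
            # x: [n_nodes, in_dim] per-intersection features (e.g., queue lengths per approach)
            # adj: [n_nodes, n_nodes] row-normalized adjacency of the road network
            return torch.relu(self.self_lin(x) + self.nbr_lin(adj @ x))

    class TransitionModel(nn.Module):
        # Learned dynamics: predicts next per-intersection features from the node
        # embedding and a one-hot encoding of the chosen signal phase.
        def __init__(self, hid_dim, n_phases, out_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(hid_dim + n_phases, hid_dim), nn.ReLU(), nn.Linear(hid_dim, out_dim))

        def forward(self, h, phase_onehot):
            return self.net(torch.cat([h, phase_onehot], dim=-1))

    def plan_one_step(encoder, dynamics, x, adj, n_phases):
        # Greedy one-step lookahead: for each intersection, pick the phase whose
        # predicted next state has the smallest total queue under the learned model.
        h = encoder(x, adj)
        n_nodes = x.shape[0]
        best_phase = torch.zeros(n_nodes, dtype=torch.long)
        best_cost = torch.full((n_nodes,), float("inf"))
        for p in range(n_phases):
            onehot = torch.zeros(n_nodes, n_phases)
            onehot[:, p] = 1.0
            predicted_queues = dynamics(h, onehot)   # [n_nodes, out_dim]
            cost = predicted_queues.sum(dim=-1)
            better = cost < best_cost
            best_phase[better] = p
            best_cost = torch.minimum(best_cost, cost)
        return best_phase

    # Toy usage: 4 intersections, 4 incoming approaches each, 4 candidate phases.
    adj = torch.eye(4)
    x = torch.rand(4, 4)
    enc, dyn = GraphEncoder(4, 16), TransitionModel(16, 4, 4)
    print(plan_one_step(enc, dyn, x, adj, n_phases=4))

Because both modules operate per node of the road-network graph, such a design is inductive: the same weights can be applied to networks with a different number of intersections, which is the generalization property the abstract emphasizes.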
Pages: 238-250
Number of pages: 13