Intelligent Traffic Signal Phase Distribution System Using Deep Q-Network

Cited by: 9
Authors:
Joo, Hyunjin [1]
Lim, Yujin [1]
Affiliations:
[1] Sookmyung Womens Univ, Dept IT Engn, Seoul 04310, South Korea
Source:
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 01
Funding:
National Research Foundation of Singapore;
Keywords:
intelligent traffic signal control; reinforcement learning; deep Q-network; multi-intersection; throughput;
DOI:
10.3390/app12010425
Chinese Library Classification: O6 [Chemistry];
Discipline code: 0703;
Abstract
Traffic congestion is a worsening problem owing to increasing traffic volume. Congestion lengthens driving times and wastes fuel, generating large amounts of exhaust fumes and accelerating environmental pollution; it is therefore an important problem to address. Smart transportation systems manage various traffic problems by utilizing the infrastructure and networks available in smart cities. The traffic signal control system used in smart transportation analyzes and controls traffic flow in real time, so traffic congestion can be effectively alleviated. We conducted preliminary experiments to analyze how throughput, queue length, and waiting time affect system performance under different signal allocation techniques. The results indicate that the standard deviation of the queue length is an important factor in order allocation. We propose a smart traffic signal control system based on a deep Q-network (DQN), a type of reinforcement learning. The proposed algorithm determines the optimal order of green signals; its goal is to maximize throughput and distribute signals efficiently by using the throughput and the standard deviation of the queue length as reward parameters.
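The reward design sketched in the abstract (maximize throughput while penalizing imbalance across lane queues, measured by their standard deviation) can be illustrated as follows. This is a minimal sketch, not the paper's implementation: the function names, the weight `w`, and the epsilon-greedy selection over candidate green-signal orders are illustrative assumptions.

```python
import numpy as np

def reward(throughput, queue_lengths, w=0.5):
    """Hypothetical reward combining the two factors named in the abstract:
    vehicles served (throughput) minus a penalty proportional to the
    standard deviation of the per-lane queue lengths."""
    return float(throughput) - w * float(np.std(queue_lengths))

def select_phase_order(q_values, epsilon, rng):
    """Epsilon-greedy choice among candidate green-signal orders,
    given Q-values estimated by the DQN (here just a list of floats)."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # explore a random order
    return int(np.argmax(q_values))              # exploit the best-valued order

# Example: balanced queues yield a higher reward than imbalanced ones
# at equal throughput, which steers the agent toward even distribution.
rng = np.random.default_rng(0)
balanced = reward(10.0, [3, 3, 3, 3])        # std = 0, no penalty
imbalanced = reward(10.0, [0, 6, 0, 6])      # std = 3, penalized
order = select_phase_order([0.2, 0.9, 0.4], epsilon=0.0, rng=rng)
```

In a full DQN agent, `q_values` would come from a neural network evaluated on the intersection state, and transitions `(state, order, reward, next_state)` would be stored in a replay buffer for training; the sketch above isolates only the reward shaping described in the abstract.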
Pages: 12