Dynamic Vehicle Traffic Control Using Deep Reinforcement Learning in Automated Material Handling System

Cited by: 0
Authors
Kang, Younkook [1 ]
Lyu, Sungwon [1 ]
Kim, Jeeyung [1 ]
Park, Bongjoon [1 ]
Cho, Sungzoon [1 ]
Affiliation
[1] Seoul Natl Univ, Dept Ind Engn, Seoul, South Korea
Keywords
DOI: not available
CLC number: TP18 [Theory of artificial intelligence]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
In automated material handling systems (AMHS), delivery time is an important issue directly associated with production cost and product quality. In this paper, we propose a dynamic routing strategy to reduce delivery time and delay. We set the target of control by analyzing traffic flows and selecting the region with the highest flow rate and congestion frequency. Then, we impose a routing cost that dynamically reflects real-time changes in traffic states. Our deep reinforcement learning model consists of a Q-learning step and a recurrent neural network, through which traffic states and action values are predicted. Experimental results show that the proposed method decreases manufacturing costs while increasing productivity. Additionally, we find evidence that the reinforcement learning structure proposed in this study can autonomously and dynamically adjust to changes in traffic patterns.
Pages: 9949-9950
Page count: 2
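
The abstract describes a controller that combines a Q-learning update with a recurrent neural network used to predict traffic states and action values for routing-cost adjustment. The following is a minimal sketch of one plausible reading of that architecture, written in PyTorch; the state features, the three routing-cost actions, the reward signal, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed architecture, not the paper's code): a recurrent
# Q-network maps a sequence of traffic-state observations for a congested
# region to Q-values over routing-cost adjustments.
import torch
import torch.nn as nn

STATE_DIM = 8      # hypothetical per-region features (e.g., flow rate, congestion frequency)
NUM_ACTIONS = 3    # hypothetical actions: decrease, keep, or increase the routing cost
GAMMA = 0.95       # discount factor for the Q-learning target

class RecurrentQNet(nn.Module):
    """GRU encoder over recent traffic states followed by a Q-value head."""
    def __init__(self, state_dim=STATE_DIM, hidden=64, num_actions=NUM_ACTIONS):
        super().__init__()
        self.rnn = nn.GRU(state_dim, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, num_actions)

    def forward(self, state_seq):
        # state_seq: (batch, time, state_dim) -> Q-values: (batch, num_actions)
        _, h_n = self.rnn(state_seq)
        return self.q_head(h_n[-1])

def q_learning_step(net, target_net, optimizer, batch):
    """One Q-learning update on a batch of (state_seq, action, reward, next_seq, done)."""
    state_seq, action, reward, next_seq, done = batch
    q_sa = net(state_seq).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = reward + GAMMA * (1 - done) * target_net(next_seq).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    net, target_net = RecurrentQNet(), RecurrentQNet()
    target_net.load_state_dict(net.state_dict())
    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    # Dummy batch standing in for sequences of observed AMHS traffic states.
    batch = (
        torch.randn(32, 10, STATE_DIM),        # state sequences
        torch.randint(0, NUM_ACTIONS, (32,)),  # actions taken
        torch.randn(32),                       # rewards (e.g., negative delivery delay)
        torch.randn(32, 10, STATE_DIM),        # next state sequences
        torch.zeros(32),                       # done flags
    )
    print("loss:", q_learning_step(net, target_net, optimizer, batch))
```

In a full training loop, the batches would be sampled from a replay buffer of observed traffic transitions and the target network synchronized periodically; those details are omitted here.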