Graph Convolution Reinforcement Learning for Decision-Making in Highway Overtaking Scenario

Cited by: 3
Authors
Meng Xiaoqiang [1 ]
Yang Fan [1 ]
Li Xueyuan [1 ]
Liu Qi [1 ]
Gao Xin [1 ]
Li Zirui [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Mech Engn, Beijing, Peoples R China
Keywords
decision-making; deep reinforcement learning; graph neural network; autonomous vehicles; multi-agent; VEHICLE;
DOI
10.1109/ICIEA54703.2022.10006015
Chinese Library Classification
T [Industrial Technology];
Discipline Code
08 ;
Abstract
Overtaking by autonomous vehicles (AVs) is an extremely complex process that involves many factors and poses significant safety hazards, yet most current research does not consider the impact of a dynamic environment on autonomous vehicles. To solve the multi-agent overtaking problem on the highway, this paper proposes a decision-making algorithm for AVs. The algorithm combines a graph neural network (GNN) with deep reinforcement learning (DRL) and is trained with several methods, including deep Q-network (DQN), double DQN, dueling DQN, and D3QN, for simulation. First, the simulation environment is a three-lane highway constructed in SUMO. Second, the highway contains both human-driven vehicles (HDVs) and AVs, with maximum speeds of 10 km/h and 20 km/h. Finally, the two kinds of vehicles appear in the right lane with different probabilities. The training effect is evaluated by the time a vehicle takes to enter and exit the current environment and by the average speed of the AV. The simulation results show that the algorithm improves the efficiency of the overtaking process and reduces the accident rate.
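The abstract pairs graph convolution with DQN-style training. As a rough, hypothetical sketch (this is not the authors' code; the symmetric GCN normalization and the Double DQN target below are standard textbook formulations, and all names and sizes are illustrative assumptions):

```python
import numpy as np

def graph_conv(adj, feats, weight):
    """One graph-convolution layer: add self-loops, apply symmetric
    degree normalization D^{-1/2} (A + I) D^{-1/2}, then ReLU."""
    a_hat = adj + np.eye(adj.shape[0])            # self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)             # D^{-1/2}
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ feats @ weight, 0.0)

def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it, reducing overestimation bias."""
    a_star = np.argmax(q_online_next, axis=1)
    q_eval = q_target_next[np.arange(len(a_star)), a_star]
    return reward + gamma * (1.0 - done) * q_eval
```

In a setting like the one described, `adj` would encode which vehicles can observe or interact with each other and `feats` would hold per-vehicle states (position, speed, lane); the GCN output would then feed the Q-network whose targets are computed as above.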
Pages: 417 - 422 (6 pages)