Policy Iteration Based Approximate Dynamic Programming Toward Autonomous Driving in Constrained Dynamic Environment

Cited by: 16
Authors
Lin, Ziyu [1 ]
Ma, Jun [2 ,3 ]
Duan, Jingliang [4 ]
Li, Shengbo Eben [1 ]
Ma, Haitong [1 ]
Cheng, Bo [1 ]
Lee, Tong Heng [5 ]
Affiliations
[1] Tsinghua Univ, Sch Vehicle & Mobil, Beijing 100084, Peoples R China
[2] Hong Kong Univ Sci & Technol Guangzhou, Robot & Autonomous Syst Thrust, Guangzhou, Peoples R China
[3] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[4] Univ Sci & Technol Beijing, Sch Mech Engn, Beijing 100083, Peoples R China
[5] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 117583, Singapore
Funding
National Key R&D Program of China;
Keywords
Planning; Autonomous vehicles; Vehicle dynamics; Task analysis; Heuristic algorithms; Approximation algorithms; Roads; Autonomous driving; approximate dynamic programming; motion planning; constrained optimization; reinforcement learning; VEHICLE;
DOI
10.1109/TITS.2023.3237568
CLC Classification
TU [Building Science];
Discipline Code
0813;
Abstract
Motion planning for autonomous driving is typically difficult because the vehicle model is nonlinear and driving scenarios are complex. In particular, most existing methods do not generalize to dynamically changing scenarios with varying surrounding vehicles. To address this problem, this work investigates an integrated decision and control framework. Within this framework, a static path planning module determines the candidate reference paths ahead, and an optimal path-tracking controller then realizes the specific autonomous driving task. An innovative and effective constrained finite-horizon approximate dynamic programming (ADP) algorithm is presented to generate the desired control policy for effective path tracking. With a generalized policy neural network that maps states to control inputs, the proposed algorithm remains effective for motion planning in changing driving environments with varying surrounding vehicles. Moreover, by training offline and executing online, the algorithm alleviates the typically heavy online computational load. Through the use of multi-layer neural networks in conjunction with an actor-critic framework, the constrained ADP method is capable of handling complex and multi-dimensional scenarios. Finally, various simulations demonstrate the effectiveness of the constrained ADP algorithm.
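The offline-training/online-execution scheme described in the abstract can be illustrated with a minimal finite-horizon policy-iteration loop: repeated policy evaluation (rolling out the current policy to measure its cost, with a penalty for constraint violation) followed by policy improvement. The 1-D tracking model, the lateral bound, the gains, and all names below are illustrative assumptions for the sketch, not the authors' implementation:

```python
# Hedged sketch (not the paper's code): finite-horizon policy iteration
# on a toy 1-D path-tracking model x_{t+1} = x_t + u_t * DT, penalising
# tracking error, control effort, and an assumed lateral bound |x| <= X_MAX.

DT, HORIZON = 0.1, 30
X_MAX = 1.0            # assumed lateral constraint

def rollout_cost(k, x0=0.8):
    """Policy evaluation: total finite-horizon cost of the linear policy u = -k*x."""
    x, cost = x0, 0.0
    for _ in range(HORIZON):
        u = -k * x
        cost += x**2 + 0.1 * u**2                      # tracking + effort
        cost += 100.0 * max(0.0, abs(x) - X_MAX)**2    # constraint penalty
        x = x + u * DT                                 # toy dynamics step
    return cost

def improve(k, eps=1e-4, lr=0.05):
    """Policy improvement: one numerical-gradient descent step on the gain k."""
    grad = (rollout_cost(k + eps) - rollout_cost(k - eps)) / (2 * eps)
    return k - lr * grad

k = 0.0                  # initial (inactive) policy
for _ in range(200):     # offline training loop
    k = improve(k)
# Online execution then reduces to evaluating u = -k*x at each time step,
# which is the source of the low online computational load mentioned above.
```

In the paper the scalar gain is replaced by a multi-layer policy network trained in an actor-critic fashion; the alternation between evaluation and improvement, and the offline/online split, are the same in spirit.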
Pages: 5003-5013
Page count: 11