Spatiotemporal Costmap Inference for MPC Via Deep Inverse Reinforcement Learning

Times Cited: 15
Authors
Lee, Keuntaek [1]
Isele, David [2]
Theodorou, Evangelos A. [3]
Bae, Sangjae [2]
Affiliations
[1] Georgia Inst Technol, Dept Elect & Comp Engn, Atlanta, GA 30318 USA
[2] Honda Res Inst USA Inc, Div Res, San Jose, CA 95110 USA
[3] Georgia Inst Technol, Sch Aerosp Engn, Atlanta, GA 30318 USA
Keywords
Learning from demonstration; reinforcement learning; optimization and optimal control; motion and path planning; autonomous vehicle navigation
DOI
10.1109/LRA.2022.3146635
CLC Classification
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
It can be difficult to autonomously produce driver behavior so that it appears natural to other traffic participants. Through Inverse Reinforcement Learning (IRL), we can automate this process by learning the underlying reward function from human demonstrations. We propose a new IRL algorithm that learns a goal-conditioned spatiotemporal reward function. The resulting costmap is used by Model Predictive Controllers (MPCs) to perform a task without any hand-designing or hand-tuning of the cost function. We evaluate our proposed Goal-conditioned SpatioTemporal Zeroing Maximum Entropy Deep IRL (GSTZ)-MEDIRL framework together with MPC in the CARLA simulator for autonomous driving, lane keeping, and lane changing tasks in a challenging dense-traffic highway scenario. Our proposed methods show higher success rates than baseline methods, including behavior cloning, state-of-the-art RL policies, and MPC with a learning-based behavior prediction model.
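For readers unfamiliar with the maximum-entropy deep IRL update underlying frameworks like MEDIRL, the sketch below illustrates the core idea described in the abstract: a small network emits a per-cell reward map, and training pushes that map toward the expert's state-visitation statistics. This is a minimal, hypothetical sketch, not the authors' implementation; the network architecture and the names CostmapNet, expert_svf, and expected_svf are assumptions made for illustration.

    # Minimal MaxEnt deep IRL sketch (hypothetical, not the authors' code).
    import torch
    import torch.nn as nn

    class CostmapNet(nn.Module):
        """Maps goal-conditioned grid features to one reward value per cell."""
        def __init__(self, in_channels: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(32, 1, kernel_size=1),
            )

        def forward(self, features: torch.Tensor) -> torch.Tensor:
            # features: (B, C, H, W) -> reward map: (B, H, W)
            return self.net(features).squeeze(1)

    def medirl_step(net, opt, features, expert_svf, expected_svf):
        """One MaxEnt deep IRL update.

        The MaxEnt IRL gradient w.r.t. the reward map is the difference
        between the state-visitation frequencies (SVF) expected under the
        current reward and those observed in expert demonstrations, so a
        surrogate loss whose gradient matches that difference suffices.
        """
        reward_map = net(features)
        loss = (reward_map * (expected_svf - expert_svf)).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        return reward_map.detach()

In a pipeline like the one the abstract describes, the negated learned reward map would then serve as the stage cost that the MPC minimizes along candidate trajectories.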
Pages: 3194-3201 (8 pages)