Safe Reinforcement Learning for an Energy-Efficient Driver Assistance System

Cited by: 2
Authors
Hailemichael, Habtamu [1 ]
Ayalew, Beshah [1 ]
Kerbel, Lindsey [1 ]
Ivanco, Andrej [2 ]
Loiselle, Keith [2 ]
Affiliations
[1] Clemson Univ, Automot Engn, Greenville, SC 29607 USA
[2] Allison Transmission Inc, One Allison Way, Indianapolis, IN 46222 USA
Source
IFAC PAPERSONLINE | 2022, Vol. 55, No. 37
Keywords
RL driver-assist; Safe reinforcement learning; Safety filtering; Control barrier functions;
DOI
10.1016/j.ifacol.2022.11.250
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
Reinforcement learning (RL)-based driver assistance systems seek to improve fuel consumption via continual improvement of powertrain control actions, informed by experiential data from the field. However, the need to explore diverse experiences in order to learn optimal policies often limits the application of RL techniques in safety-critical systems such as vehicle control. In this paper, an exponential control barrier function (ECBF) is derived and used to filter out unsafe actions proposed by an RL-based driver assistance system. The RL agent freely explores and optimizes the performance objectives, while unsafe actions are projected to the closest actions in the safe domain. The reward is structured so that the driver's acceleration requests are met in a manner that improves fuel economy without compromising comfort. The optimal gear and traction torque control actions that maximize the cumulative reward are computed via the Maximum a Posteriori Policy Optimization (MPO) algorithm configured for a hybrid action space. The proposed safe-RL scheme is trained and evaluated in car-following scenarios, where it is shown to avoid collisions both during training and evaluation while delivering the expected fuel economy improvements of the driver assistance system. Copyright (c) 2022 The Authors. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0)
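The abstract's safety-filtering idea (project an unsafe RL action to the closest action satisfying an ECBF condition) can be illustrated with a minimal sketch. This is not the paper's actual filter: the function name `ecbf_filter`, the gains `k1`/`k2`, and the safe distance `d_safe` are hypothetical, the lead vehicle's acceleration is assumed zero, and the action is reduced to a scalar ego acceleration (the paper's hybrid gear/torque action space is omitted), so the nearest-safe-action projection collapses to a clip.

```python
def ecbf_filter(a_rl, gap, rel_vel,
                d_safe=5.0, k1=2.0, k2=1.0,
                a_min=-6.0, a_max=3.0):
    """Project an RL acceleration command onto an ECBF-safe set (sketch).

    Barrier (relative degree 2 w.r.t. ego acceleration):
        h      = gap - d_safe
        h_dot  = rel_vel            (lead speed minus ego speed)
        h_ddot = -a_ego             (lead acceleration assumed zero)
    ECBF condition:  h_ddot + k1*h_dot + k2*h >= 0
        =>  a_ego <= k1*rel_vel + k2*(gap - d_safe)
    With a scalar action, the closest safe action is a clip to this bound
    intersected with the actuator limits [a_min, a_max].
    """
    a_safe_max = k1 * rel_vel + k2 * (gap - d_safe)
    return max(a_min, min(a_rl, a_max, a_safe_max))
```

With a large gap the RL action passes through unchanged; as the gap closes while the ego approaches the lead vehicle, the bound tightens and the filter forces braking regardless of what the RL policy proposes.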
Pages: 615-620
Number of pages: 6