Anti-Disturbance Self-Supervised Reinforcement Learning for Perturbed Car-Following System

Times Cited: 3
Authors
Li, Meng [1 ]
Li, Zhibin [1 ]
Wang, Shunchao [1 ]
Wang, Bingtong [1 ]
Affiliations
[1] Southeast Univ, Sch Transportat, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Anti-disturbance; car-following; self-supervised reinforcement learning; traffic oscillation; CONNECTED CRUISE CONTROL; AUTOMATED VEHICLES; CONTROL STRATEGY; PLATOON CONTROL; MODEL;
DOI
10.1109/TVT.2023.3270356
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
This paper proposes an anti-disturbance car-following strategy for attenuating (i) exogenous disturbances from preceding traffic oscillations and (ii) endogenous disturbances in vehicular control systems (e.g., wind gusts, ground friction, and rolling resistance). First, it employs a modified robust controller to generate expert car-following control experience. Next, it imitates the expert behavior via the behavioral cloning (BC) technique, thereby developing the anti-disturbance ability. Finally, the obtained policy is optimized using a self-supervised reinforcement learning (RL) approach. The simulation experiments, comprising both training and evaluation phases, are performed in Python. To simulate car-following scenarios, we utilize ground-truth data from the Next Generation Simulation (NGSIM) datasets. Through recursive interactions with the perturbed car-following environment, self-supervised RL drives stable policy improvement. The proposed anti-disturbance self-supervised RL (ADSSRL) policy presents a smooth and almost monotonically increasing reward curve. Further evaluation of disturbance-dampening performance shows that at least a 44.5% reduction in control efficiency cost and a 10.1% reduction in driving comfort cost are achieved compared with the baselines.
Pages: 11318-11331
Number of pages: 14
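The abstract describes a two-stage learning pipeline: behavioral cloning of an expert controller's (state, action) pairs, followed by RL-style fine-tuning of the cloned policy against a control cost. The following is a minimal sketch of that structure only; the linear policy, the synthetic "expert" gains, the quadratic cost, and the finite-difference update are all hypothetical simplifications, not the paper's actual controller, reward, or training algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert: acceleration proportional to gap error and speed diff.
K_expert = np.array([0.5, 0.3])          # [gap-error gain, speed-diff gain]
states = rng.normal(size=(200, 2))       # (gap error, relative speed) samples
actions = states @ K_expert              # expert accelerations

# Stage 1: behavioral cloning -- least-squares fit of a linear policy
# to the expert demonstrations.
K_bc, *_ = np.linalg.lstsq(states, actions, rcond=None)

# Stage 2: RL-style fine-tuning -- nudge the cloned gains to reduce a
# quadratic cost (tracking error + control effort) by gradient descent,
# with a finite-difference gradient standing in for a policy gradient.
def cost(K):
    a = states @ K
    return np.mean((a - actions) ** 2 + 0.01 * a ** 2)

K = K_bc.copy()
for _ in range(100):
    grad = np.zeros_like(K)
    for i in range(K.size):
        e = np.zeros_like(K)
        e[i] = 1e-4
        grad[i] = (cost(K + e) - cost(K - e)) / 2e-4
    K -= 0.5 * grad

print(np.round(K_bc, 3))     # cloned gains, close to the expert's
print(cost(K) <= cost(K_bc))
```

Cloning recovers the expert gains almost exactly on this noiseless data, and the fine-tuning stage then trades a little imitation fidelity for lower control effort, mirroring (in toy form) how the ADSSRL policy improves on its BC initialization.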