Predictive reinforcement learning in non-stationary environments using weighted mixture policy

Cited by: 0
Authors
Pourshamsaei, Hossein [1 ]
Nobakhti, Amin [1 ]
Affiliations
[1] Sharif Univ Technol, Dept Elect Engn, Azadi Ave, Tehran 111554363, Iran
Keywords
Reinforcement learning; Non-stationary environments; Adaptive learning rate; Mixture policy; Predictive reference tracking; MODEL;
DOI
10.1016/j.asoc.2024.111305
CLC number
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement Learning (RL) in non-stationary environments is a formidable challenge. In some applications, abrupt changes in the environment model can be anticipated, yet the existing literature lacks a framework that proactively exploits such predictions to improve reward optimization. This paper introduces a methodology that leverages these predictions preemptively so as to maximize overall achieved performance. It does so by forming a weighted mixture policy from the optimal policies of the prevailing and forthcoming models. To ensure safe learning, an adaptive learning rate is derived for training the weighted mixture policy; this theoretically guarantees monotonic performance improvement at each update. Empirical trials focus on a model-free predictive reference-tracking scenario with piecewise constant references. On the cart-pole position control problem, the proposed algorithm is shown to surpass prior techniques such as context Q-learning and RL with context detection in non-stationary environments. It also outperforms applying the individual optimal policy of each observed environment model (i.e., policies that do not use predictions).
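The core construction described in the abstract, a policy formed as a weighted mixture of the optimal policies of the current and the predicted upcoming environment model, can be sketched as follows. This is only an illustrative sketch: the function name, the discrete-action representation, and the fixed weight are assumptions, whereas the paper trains the mixture weight under a derived adaptive learning rate rather than setting it by hand.

```python
import numpy as np

def mixture_policy(pi_current, pi_next, w):
    """Weighted mixture of two stochastic policies over a discrete action set.

    pi_current, pi_next: action-probability vectors of the optimal policies
    for the prevailing and the forthcoming environment models.
    w in [0, 1]: mixture weight placed on the forthcoming model's policy
    (assumed fixed here purely for illustration).
    """
    mix = (1.0 - w) * np.asarray(pi_current, dtype=float) \
        + w * np.asarray(pi_next, dtype=float)
    return mix / mix.sum()  # renormalize to guard against rounding error

# Hypothetical example: two 3-action policies, equal weight on each model.
pi_a = [0.7, 0.2, 0.1]  # optimal under the current model
pi_b = [0.1, 0.2, 0.7]  # optimal under the predicted next model
print(mixture_policy(pi_a, pi_b, 0.5))
```

As the predicted model switch approaches, increasing `w` shifts probability mass toward actions favored by the forthcoming model's optimal policy, which is the mechanism the abstract credits for outperforming the individual per-model policies.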
Pages: 16
Related papers
50 records
  • [1] Towards Reinforcement Learning for Non-stationary Environments
    Dal Toe, Sebastian Gregory
    Tiddeman, Bernard
    Mac Parthalain, Neil
    ADVANCES IN COMPUTATIONAL INTELLIGENCE SYSTEMS, UKCI 2023, 2024, 1453 : 41 - 52
  • [2] Reinforcement learning algorithm for non-stationary environments
    Padakandla, Sindhu
    Prabuchandran, K. J.
    Bhatnagar, Shalabh
    APPLIED INTELLIGENCE, 2020, 50 (11) : 3590 - 3606
  • [3] Reinforcement learning in episodic non-stationary Markovian environments
    Choi, SPM
    Zhang, NL
    Yeung, DY
    IC-AI '04 & MLMTA'04, VOL 1 AND 2, PROCEEDINGS, 2004, : 752 - 758
  • [4] Adaptive deep reinforcement learning for non-stationary environments
    Zhu, Jin
    Wei, Yutong
    Kang, Yu
    Jiang, Xiaofeng
    Dullerud, Geir E.
    SCIENCE CHINA-INFORMATION SCIENCES, 2022, 65 (10)
  • [5] Meta-Reinforcement Learning in Non-Stationary and Dynamic Environments
    Bing, Zhenshan
    Lerch, David
    Huang, Kai
    Knoll, Alois
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (03) : 3476 - 3491
  • [6] An adaptable fuzzy reinforcement learning method for non-stationary environments
    Haighton, Rachel
    Asgharnia, Amirhossein
    Schwartz, Howard
    Givigi, Sidney
    NEUROCOMPUTING, 2024, 604
  • [7] A robust policy bootstrapping algorithm for multi-objective reinforcement learning in non-stationary environments
    Abdelfattah, Sherif
    Kasmarik, Kathryn
    Hu, Jiankun
    ADAPTIVE BEHAVIOR, 2020, 28 (04) : 273 - 292