Temporal-Logic-Based Reward Shaping for Continuing Reinforcement Learning Tasks

Cited: 0
Authors
Jiang, Yuqian [1 ]
Bharadwaj, Suda [2 ]
Wu, Bo [2 ]
Shah, Rishi [1 ,3 ]
Topcu, Ufuk [2 ]
Stone, Peter [1 ,4 ]
Affiliations
[1] Univ Texas Austin, Dept Comp Sci, Austin, TX 78712 USA
[2] Univ Texas Austin, Dept Aerosp Engn & Engn Mech, Austin, TX 78712 USA
[3] Amazon, Seattle, WA USA
[4] Sony AI, Tokyo, Japan
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
In continuing tasks, average-reward reinforcement learning may be a more appropriate problem formulation than the more common discounted reward formulation. As in the discounted setting, learning an optimal policy here typically requires a large amount of training experience. Reward shaping is a common approach for incorporating domain knowledge into reinforcement learning in order to speed up convergence to an optimal policy. However, to the best of our knowledge, the theoretical properties of reward shaping have thus far been established only in the discounted setting. This paper presents the first reward shaping framework for average-reward learning and proves that, under standard assumptions, the optimal policy under the original reward function can be recovered. To avoid the need for manual construction of the shaping function, we introduce a method for utilizing domain knowledge expressed as a temporal logic formula. The formula is automatically translated to a shaping function that provides additional reward throughout the learning process. We evaluate the proposed method on three continuing tasks. In all cases, shaping speeds up the average-reward learning rate without any reduction in the performance of the learned policy compared to relevant baselines.
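The core mechanism the abstract describes can be illustrated with the classic potential-based form of reward shaping. The sketch below is a minimal illustration, not the paper's construction: the potential function `phi` is a hypothetical stand-in for the shaping function the paper derives automatically from a temporal logic formula, and we use the undiscounted form `r + phi(s') - phi(s)` appropriate to the average-reward setting, where the shaping terms telescope to zero over any recurrent cycle and therefore leave each policy's average reward unchanged.

```python
def shaped_reward(r, s, s_next, phi):
    """Potential-based shaping in the average-reward (undiscounted) form.

    Augments the environment reward r with the potential difference
    phi(s') - phi(s). Over a cycle that returns to its start state,
    these differences telescope to zero, so the average reward of any
    policy under the shaped signal equals its original average reward.
    """
    return r + phi(s_next) - phi(s)


# Hypothetical potential: larger in states "closer" to satisfying the
# objective (in the paper, such progress would come from a temporal
# logic formula; here it is just an illustrative stand-in).
phi = lambda s: float(s)

# A recurrent cycle of states 0 -> 1 -> 2 -> 3 -> 0 with base rewards.
transitions = [(0, 1), (1, 2), (2, 3), (3, 0)]
base_rewards = [0.0, 0.0, 1.0, 0.0]

total_base = sum(base_rewards)
total_shaped = sum(
    shaped_reward(r, s, s_next, phi)
    for r, (s, s_next) in zip(base_rewards, transitions)
)
# total_shaped equals total_base: shaping telescopes out over the cycle,
# so the cycle's average reward (and hence the optimal policy) is preserved.
```

The intermediate shaped rewards differ from the base rewards (e.g. the transition 0 -> 1 earns an extra +1 of shaping), which is what guides exploration and speeds up learning; only the per-cycle totals, and thus the average reward, are invariant.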
Pages: 7995-8003
Page count: 9
Related Papers
50 records
  • [1] Distributed Control using Reinforcement Learning with Temporal-Logic-Based Reward Shaping
    Zhang, Ningyuan
    Liu, Wenliang
    Belta, Calin
    [J]. LEARNING FOR DYNAMICS AND CONTROL CONFERENCE, VOL 168, 2022, 168
  • [2] Funnel-Based Reward Shaping for Signal Temporal Logic Tasks in Reinforcement Learning
    Saxena, Naman
    Gorantla, Sandeep
    Jagtap, Pushpak
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (02) : 1373 - 1379
  • [3] Reward Shaping Based Federated Reinforcement Learning
    Hu, Yiqiu
    Hua, Yun
    Liu, Wenyan
    Zhu, Jun
    [J]. IEEE ACCESS, 2021, 9 : 67259 - 67267
  • [4] Lifelong reinforcement learning with temporal logic formulas and reward machines
    Zheng, Xuejing
    Yu, Chao
    Zhang, Minjie
    [J]. KNOWLEDGE-BASED SYSTEMS, 2022, 257
  • [5] Plan-based Reward Shaping for Reinforcement Learning
    Grzes, Marek
    Kudenko, Daniel
    [J]. 2008 4TH INTERNATIONAL IEEE CONFERENCE INTELLIGENT SYSTEMS, VOLS 1 AND 2, 2008, : 416 - 423
  • [6] Potential Based Reward Shaping for Hierarchical Reinforcement Learning
    Gao, Yang
    Toni, Francesca
    [J]. PROCEEDINGS OF THE TWENTY-FOURTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI), 2015, : 3504 - 3510
  • [7] Belief Reward Shaping in Reinforcement Learning
    Marom, Ofir
    Rosman, Benjamin
    [J]. THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 3762 - 3769
  • [8] Reward Shaping in Episodic Reinforcement Learning
    Grzes, Marek
    [J]. AAMAS'17: PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2017, : 565 - 573
  • [9] Multigrid Reinforcement Learning with Reward Shaping
    Grzes, Marek
    Kudenko, Daniel
    [J]. ARTIFICIAL NEURAL NETWORKS - ICANN 2008, PT I, 2008, 5163 : 357 - 366
  • [10] Reward shaping in multiagent reinforcement learning for self-organizing systems in assembly tasks
    Huang, Bingling
    Jin, Yan
    [J]. ADVANCED ENGINEERING INFORMATICS, 2022, 54