Learning value functions with relational state representations for guiding task-and-motion planning

Cited by: 0
Authors
Kim, Beomjoon [1 ]
Shimanuki, Luke [1 ]
Affiliations
[1] MIT, Comp Sci & Artificial Intelligence Lab, Cambridge, MA 02139 USA
Source
CONFERENCE ON ROBOT LEARNING, VOL 100, 2019
Keywords
Task and motion planning; value-function learning
DOI
Not available
Chinese Library Classification (CLC)
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
We propose a novel relational state representation and an action-value function learning algorithm that learns from planning experience for geometric task-and-motion planning (GTAMP) problems, in which the goal is to move several objects to regions in the presence of movable obstacles. The representation captures, using a small set of predicates, which objects occlude the manipulation of other objects. It supports efficient learning, using graph neural networks, of an action-value function that can be used to guide a GTAMP solver. Importantly, it enables learning from planning experience on simple problems and generalizing to more complex problems and even across substantially different geometric environments. We demonstrate the method in two challenging GTAMP domains.
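
To make the abstract concrete, the following is a minimal sketch, not the authors' implementation: a relational state encoded as a predicate graph (with assumed predicates such as occludes_pick and in_region), and a small graph neural network that estimates Q-values for moving an object to a region. All names, feature sizes, and the pairwise scoring head are illustrative assumptions made here for clarity.

# Minimal illustrative sketch (not the authors' code). Predicate names,
# feature sizes, and the scoring head are assumptions made for clarity.
import numpy as np

rng = np.random.default_rng(0)

# Relational state: entities (movable objects, obstacles, regions) and a small
# set of binary predicates, here centered on occlusion as in the abstract.
PREDICATES = ["occludes_pick", "occludes_place", "in_region"]   # hypothetical set
entities = ["obj1", "obj2", "obstacle1", "region_goal"]
facts = [
    ("occludes_pick", "obstacle1", "obj1"),   # obstacle1 blocks picking obj1
    ("in_region", "obj2", "region_goal"),     # obj2 already sits in the goal region
]

def encode_graph(entities, facts):
    """One-hot node features plus one adjacency matrix per predicate type."""
    n = len(entities)
    idx = {e: i for i, e in enumerate(entities)}
    node_feat = np.eye(n)
    adj = np.zeros((len(PREDICATES), n, n))
    for pred, a, b in facts:
        adj[PREDICATES.index(pred), idx[a], idx[b]] = 1.0
    return node_feat, adj

class TinyRelationalQ:
    """One round of predicate-typed message passing, then a pairwise Q head."""
    def __init__(self, n_nodes, hidden=16):
        self.W_in = rng.normal(0.0, 0.1, (n_nodes, hidden))
        self.W_msg = rng.normal(0.0, 0.1, (len(PREDICATES), hidden, hidden))
        self.W_q = rng.normal(0.0, 0.1, 2 * hidden)

    def q_value(self, node_feat, adj, target_idx, region_idx):
        h = np.tanh(node_feat @ self.W_in)
        # Aggregate messages separately per predicate type, then combine.
        msgs = sum(adj[p] @ h @ self.W_msg[p] for p in range(len(PREDICATES)))
        h = np.tanh(h + msgs)
        # The action is abstracted as "move entity target_idx into entity region_idx".
        return float(np.concatenate([h[target_idx], h[region_idx]]) @ self.W_q)

node_feat, adj = encode_graph(entities, facts)
q_net = TinyRelationalQ(n_nodes=len(entities))
# A planner could rank candidate (object, region) actions by these Q estimates.
print(q_net.q_value(node_feat, adj, target_idx=0, region_idx=3))

The weights are random in this sketch; in the approach described above they would be fit to values extracted from planning experience, and a GTAMP solver would use the resulting scores to decide which object-to-region actions to try first.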
Pages: 14
Related papers
50 records in total
  • [31] Interactive and On-Line Learning System for Assembly Task Motion Planning
    Yan, Yu
    Poirson, Emilie
    Bennis, Fouad
    PROCEEDINGS OF THE ASME INTERNATIONAL DESIGN ENGINEERING TECHNICAL CONFERENCES AND COMPUTERS AND INFORMATION IN ENGINEERING CONFERENCE, 2013, VOL 4, 2014
  • [32] Reinforcement learning based motion planning of dynamic manipulation task for manipulator
    Eng. Training Center, Shanghai Jiaotong Univ., Shanghai 200240, China
    Xitong Fangzhen Xuebao (Journal of System Simulation), 2006, 9: 2537 - 2540
  • [33] Learning Task-Independent Game State Representations from Unlabeled Images
    Trivedi, Chintan
    Makantasis, Konstantinos
    Liapis, Antonios
    Yannakakis, Georgios N.
    2022 IEEE CONFERENCE ON GAMES, COG, 2022, : 88 - 95
  • [35] Task state representations in vmPFC mediate relevant and irrelevant value signals and their behavioral influence
    Moneta, Nir
    Garvert, Mona M.
    Heekeren, Hauke R.
    Schuck, Nicolas W.
    NATURE COMMUNICATIONS, 2023, 14 (01)
  • [36] Task-Motion Planning with Reinforcement Learning for Adaptable Mobile Service Robots
    Jiang, Yuqian
    Yang, Fangkai
    Zhang, Shiqi
    Stone, Peter
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 7529 - 7534
  • [37] Learning Motion Planning Policies in Uncertain Environments through Repeated Task Executions
    Tsang, Florence
    Macdonald, Ryan A.
    Smith, Stephen L.
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 8 - 14
  • [38] Optimistic Reinforcement Learning-Based Skill Insertions for Task and Motion Planning
    Liu, Gaoyuan
    de Winter, Joris
    Durodie, Yuri
    Steckelmacher, Denis
    Nowe, Ann
    Vanderborght, Bram
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (06): 5974 - 5981
  • [39] Learning to Correct Mistakes: Backjumping in Long-Horizon Task and Motion Planning
    Sung, Yoonchang
    Wang, Zizhao
    Stone, Peter
    CONFERENCE ON ROBOT LEARNING, VOL 205, 2022, 205: 2115 - 2124
  • [40] Learning to guide task and motion planning using score-space representation
    Kim, Beomjoon
    Wang, Zi
    Kaelbling, Leslie Pack
    Lozano-Perez, Tomas
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2019, 38 (07): 793 - 812