Hierarchical Task and Motion Planning through Deep Reinforcement Learning

Cited by: 3
Authors
Newaz, Abdullah Al Redwan [1 ]
Alam, Tauhidul [2 ]
Affiliations
[1] North Carolina A&T State Univ, Dept Elect & Comp Engn, Greensboro, NC 27411 USA
[2] Louisiana State Univ, Dept Comp Sci, Shreveport, LA 71115 USA
DOI
10.1109/IRC52146.2021.00023
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Task and motion planning (TAMP) integrates the generation of high-level tasks in a discrete space with the execution of low-level actions in a continuous space. This integration is susceptible to uncertainties and computationally challenging because low-level actions must be verified against high-level tasks. Therefore, this paper presents a hierarchical task and motion planning method under uncertainties. We utilize Markov Decision Processes (MDPs) to model task and motion planning in a stochastic environment. The motion planner handles motion uncertainty and leverages physical constraints to synthesize an optimal low-level control policy for a single robot, generating motions in continuous action and state spaces. Given the optimal control policy for multiple homogeneous robots, the task planner synthesizes an optimal high-level tasking policy in discrete task and state spaces, addressing both task and motion uncertainties. Both the optimal tasking and control policies are synthesized through deep reinforcement learning algorithms. The performance of our method is validated in realistic physics-based simulations with two quadrotors transporting objects in a warehouse setting.
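The sketch below is a minimal, self-contained illustration of the two-level decomposition the abstract describes, not the authors' implementation: the paper trains deep reinforcement learning policies in continuous and discrete spaces, whereas this sketch substitutes tabular Q-learning for the discrete task layer and a hand-coded proportional controller for the continuous motion layer. All task names, goal coordinates, noise levels, and reward values are illustrative assumptions.

```python
import numpy as np

# Discrete task space (illustrative names, not from the paper).
TASKS = ["goto_shelf", "pick_object", "goto_dropoff", "place_object"]
N_STAGES = len(TASKS)          # task-progress states: stage i = i subtasks completed
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.2

# Assumed goal position for each subtask in a 2-D workspace.
GOALS = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.0, 0.0]])


class LowLevelController:
    """Stand-in for the learned continuous control policy (motion layer)."""

    def __init__(self, goal):
        self.goal = np.asarray(goal, dtype=float)

    def act(self, position):
        # Proportional velocity command toward the subtask goal; the paper
        # would query a deep RL policy over continuous states/actions here.
        return 0.5 * (self.goal - position)


def execute_task(task_idx, position, rng):
    """Roll out the low-level controller for one subtask under motion noise."""
    ctrl = LowLevelController(GOALS[task_idx])
    for _ in range(50):
        position = position + ctrl.act(position) + rng.normal(0.0, 0.02, size=2)
        if np.linalg.norm(position - ctrl.goal) < 0.05:
            return True, position          # subtask reached its goal region
    return False, position


def train_task_policy(episodes=500, seed=0):
    """Tabular Q-learning over the discrete task MDP (stand-in for deep RL)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STAGES, len(TASKS)))
    for _ in range(episodes):
        stage, position, steps = 0, np.zeros(2), 0
        while stage < N_STAGES and steps < 20:
            steps += 1
            # Epsilon-greedy choice of the next subtask.
            a = rng.integers(len(TASKS)) if rng.random() < EPS else int(np.argmax(Q[stage]))
            ok, position = execute_task(a, position, rng)
            # Reward the tasking policy only when the scheduled subtask is the
            # one the mission requires next and its low-level motion succeeds.
            reward = 1.0 if (ok and a == stage) else -0.1
            next_stage = stage + 1 if (ok and a == stage) else stage
            bootstrap = GAMMA * np.max(Q[next_stage]) if next_stage < N_STAGES else 0.0
            Q[stage, a] += ALPHA * (reward + bootstrap - Q[stage, a])
            stage = next_stage
    return Q


if __name__ == "__main__":
    Q = train_task_policy()
    print("Greedy subtask order:", [TASKS[int(np.argmax(Q[s]))] for s in range(N_STAGES)])
```

The hierarchy is visible in the interface: the task layer only chooses which subtask to attempt next, while all continuous-space detail (dynamics, motion noise, goal tracking) stays inside execute_task, mirroring the boundary the abstract draws between the task planner and the motion planner.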
Pages: 100 - 105
Page count: 6
Related Papers
50 records in total
  • [1] Hierarchical Free Gait Motion Planning for Hexapod Robots Using Deep Reinforcement Learning
    Wang, Xinpeng
    Fu, Huiqiao
    Deng, Guizhou
    Liu, Canghai
    Tang, Kaiqiang
    Chen, Chunlin
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2023, 19 (11) : 10901 - 10912
  • [2] Hierarchical Task Decomposition through Symbiosis in Reinforcement Learning
    Doucette, John A.
    Lichodzijewski, Peter
    Heywood, Malcolm I.
    PROCEEDINGS OF THE FOURTEENTH INTERNATIONAL CONFERENCE ON GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE, 2012, : 97 - 104
  • [3] Deep Reinforcement Learning for Task Planning of Virtual Characters
    Souza, Caio
Velho, Luiz
    INTELLIGENT COMPUTING, VOL 2, 2021, 284 : 694 - 711
  • [4] Task Planning in "Block World" with Deep Reinforcement Learning
    Ayunts, Edward
Panov, Aleksandr I.
    BIOLOGICALLY INSPIRED COGNITIVE ARCHITECTURES (BICA) FOR YOUNG SCIENTISTS, 2018, 636 : 3 - 9
  • [5] Emergence of Direction Selectivity and Motion Strength in Dot Motion Task Through Deep Reinforcement Learning Networks
    Fernandes, Dolton
    Kaushik, Pramod
    Bapi, Raju Surampudi
2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023
  • [6] Socially Aware Motion Planning with Deep Reinforcement Learning
    Chen, Yu Fan
    Everett, Michael
    Liu, Miao
    How, Jonathan P.
    2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2017, : 1343 - 1350
  • [7] Deep reinforcement learning for motion planning of mobile robots
    Sun H.-H.
    Hu C.-H.
    Zhang J.-G.
Kongzhi yu Juece/Control and Decision, 2021, 36 (06) : 1281 - 1292
  • [8] Hierarchical Reinforcement Learning Approach for Motion Planning in Mobile Robotics
    Buitrago-Martinez, Andrea
De la Rosa R., Fernando
    Lozano-Martinez, Fernando
    2013 IEEE LATIN AMERICAN ROBOTICS SYMPOSIUM (LARS 2013), 2013, : 83 - 88
  • [9] Simultaneous task and energy planning using deep reinforcement learning
    Wang, Di
    Hu, Mengqi
    Weir, Jeffery D.
    INFORMATION SCIENCES, 2022, 607 : 931 - 946
  • [10] Hierarchical Multicontact Motion Planning of Hexapod Robots With Incremental Reinforcement Learning
    Tang, Kaiqiang
    Fu, Huiqiao
    Deng, Guizhou
    Wang, Xinpeng
    Chen, Chunlin
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2024, 16 (04) : 1327 - 1341