Robot Motion Planning as Video Prediction: A Spatio-Temporal Neural Network-based Motion Planner

Cited by: 3
Authors
Zang, Xiao [1 ]
Yin, Miao [1 ]
Huang, Lingyi [1 ]
Yu, Jingjin [2 ]
Zonouz, Saman [1 ]
Yuan, Bo [1 ]
Affiliations
[1] Rutgers State Univ, Dept Elect & Comp Engn, New Brunswick, NJ 08854 USA
[2] Rutgers State Univ, Dept Comp Sci, New Brunswick, NJ USA
Funding
National Science Foundation (USA)
DOI
10.1109/IROS47612.2022.9981769
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Neural network (NN)-based methods have emerged as an attractive approach to robot motion planning due to the strong learning capability of NN models and their inherently high parallelism. Despite current progress in this direction, the efficient capture and processing of important sequential and spatial information, in a direct and simultaneous way, remains relatively under-explored. To overcome this challenge and unlock the potential of neural networks for motion planning tasks, we propose STP-Net, an end-to-end learning framework that fully extracts and leverages important spatio-temporal information to form an efficient neural motion planner. By interpreting the movement of the robot as a video clip, robot motion planning is transformed into a video prediction task that STP-Net can perform in a spatially and temporally efficient way. Empirical evaluations across different seen and unseen environments show that, with nearly 100% accuracy (i.e., success rate), STP-Net demonstrates very promising performance with respect to both planning speed and path cost. Compared with existing NN-based motion planners, STP-Net achieves at least 5x, 2.6x and 1.8x faster planning with lower path cost on 2D Random Forest, 2D Maze and 3D Random Forest environments, respectively. Furthermore, STP-Net can quickly and simultaneously compute multiple near-optimal paths in multi-robot motion planning tasks.
Pages: 12492-12499
Page count: 8
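The abstract's core idea, casting motion planning as iterative next-frame prediction over image-like state encodings, can be illustrated with a minimal sketch. This is not the authors' STP-Net: the environment encoding, the function names, and especially the greedy Manhattan-distance heuristic standing in for the learned spatio-temporal predictor are all illustrative assumptions; STP-Net would produce the next frame with a trained neural network instead.

```python
import numpy as np

def encode_state(obstacles, pos, goal):
    """Stack the planning state into image-like channels, so one
    planning step looks like one frame of a video:
    channel 0 = obstacle map, 1 = robot position, 2 = goal position."""
    h, w = obstacles.shape
    frame = np.zeros((3, h, w), dtype=np.float32)
    frame[0] = obstacles
    frame[1][pos] = 1.0
    frame[2][goal] = 1.0
    return frame

def predict_next(frame):
    """Stand-in for the learned predictor: move the robot to the
    free 4-neighbor cell with the smallest Manhattan distance to
    the goal, then re-encode the result as the next 'frame'."""
    obstacles, robot_ch, goal_ch = frame
    pos = tuple(np.argwhere(robot_ch == 1.0)[0])
    goal = tuple(np.argwhere(goal_ch == 1.0)[0])
    h, w = obstacles.shape
    best, best_d = pos, float("inf")
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = pos[0] + dr, pos[1] + dc
        if 0 <= r < h and 0 <= c < w and obstacles[r, c] == 0:
            d = abs(r - goal[0]) + abs(c - goal[1])
            if d < best_d:
                best, best_d = (r, c), d
    return encode_state(obstacles, best, goal)

def plan(obstacles, start, goal, max_steps=50):
    """Roll the one-step predictor forward frame by frame,
    reading the robot channel back out to build the path."""
    frame = encode_state(obstacles, start, goal)
    path = [start]
    for _ in range(max_steps):
        frame = predict_next(frame)
        pos = tuple(int(v) for v in np.argwhere(frame[1] == 1.0)[0])
        if pos == path[-1]:  # no progress: predictor is stuck
            break
        path.append(pos)
        if pos == goal:
            break
    return path
```

On an obstacle-free 5x5 grid, `plan(np.zeros((5, 5)), (0, 0), (4, 4))` reaches the goal in 8 moves. The greedy stand-in can get trapped in mazes, which is exactly the gap a learned predictor trained on solved examples is meant to close.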
Related Papers
50 records in total
  • [1] MOTION LEARNING USING SPATIO-TEMPORAL NEURAL NETWORK
    Yusoff, Nooraini
    Kabir-Ahmad, Farzana
    Jemili, Mohamad-Farif
    [J]. JOURNAL OF INFORMATION AND COMMUNICATION TECHNOLOGY-MALAYSIA, 2020, 19 (02): : 207 - 223
  • [2] SPATIO-TEMPORAL MOTION AGGREGATION NETWORK FOR VIDEO ACTION DETECTION
    Zhang, Hongcheng
    Zhao, Xu
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2180 - 2184
  • [3] SPATIO-TEMPORAL PREDICTION IN VIDEO CODING BY SPATIALLY REFINED MOTION COMPENSATION
    Seiler, Juergen
    Kaup, Andre
    [J]. 2008 15TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-5, 2008, : 2788 - 2791
  • [4] Spatio-Temporal Branching for Motion Prediction using Motion Increments
    Wang, Jiexin
    Zhou, Yujie
    Qiang, Wenwen
    Ba, Ying
    Su, Bing
    Wen, Ji-Rong
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 4290 - 4299
  • [5] Spatial Optimization in Spatio-temporal Motion Planning
    Zhang, Weize
    Yadmellat, Peyman
    Gao, Zhiwei
    [J]. 2022 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2022, : 1248 - 1254
  • [6] A neural network-based approach to robot motion control
    Grasemann, Uli
    Stronger, Daniel
    Stone, Peter
    [J]. ROBOCUP 2007: ROBOT SOCCER WORLD CUP XI, 2008, 5001 : 480 - 487
  • [7] MOBILE ROBOT MOTION PLANNER VIA NEURAL NETWORK
    Krejsa, J.
    Vechet, S.
    [J]. ENGINEERING MECHANICS 2011, 2011, : 327 - 330
  • [8] Spatio-temporal compression of the motion field in video coding
    Grigoriu, L
    [J]. 2001 IEEE FOURTH WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING, 2001, : 129 - 134
  • [9] Video Quality Assessment Metric Based on Spatio-Temporal Motion Information
    Kang, Kai
    Liu, Xingang
    Sun, Chao
    [J]. 2013 IEEE 11TH INTERNATIONAL CONFERENCE ON DEPENDABLE, AUTONOMIC AND SECURE COMPUTING (DASC), 2013, : 47 - 51
  • [10] Video objects abstraction based on the spatio-temporal characteristic in motion window
    Jian, Chen
    [J]. 2007 INTERNATIONAL CONFERENCE ON COMMUNICATIONS, CIRCUITS AND SYSTEMS PROCEEDINGS, VOLS 1 AND 2: VOL 1: COMMUNICATION THEORY AND SYSTEMS; VOL 2: SIGNAL PROCESSING, COMPUTATIONAL INTELLIGENCE, CIRCUITS AND SYSTEMS, 2007, : 735 - 739