Robot imitation learning method based on structural grammar

Citations: 0
Authors
Cong M. [1 ]
Jian J. [1 ]
Zou Q. [1 ,2 ]
Liu D. [1 ,2 ]
Affiliations
[1] School of Mechanical Engineering, Dalian University of Technology, Dalian
[2] Dalian University of Technology Jiangsu Research Institute Co.Ltd., Changzhou
Keywords
Imitation learning; Minimum description length principle; Probabilistic context-free grammar; Robot; Structural grammar;
DOI
10.13245/j.hust.211016
Abstract
To address the weak generalization of robot imitation learning methods and their high accuracy requirements on low-level detectors, an imitation learning method based on structural grammar was proposed. The method extracts a symbolic description of the scene through a vision sensor, forming a sequence of symbolic primitives that contains noise. A probabilistic context-free grammar (PCFG) is used to represent and manipulate these sequences, forming a grammar space. The minimum description length (MDL) criterion evaluates the quality of each grammar in this space, and an improved beam search algorithm finds the optimal grammar, which constitutes the general structure of the demonstrated activity. The resulting general structure can parse noisy symbolic primitive sequences and recover the correct sequence. The method's strong data-expression capability and robustness to interference were verified by comparing results on a data-synthesis experiment and a Tower of Hanoi experiment; its parsing success rate in a high-noise environment is about 90%. © 2021, Editorial Board of Journal of Huazhong University of Science and Technology. All rights reserved.
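The MDL-guided search over a grammar space that the abstract describes can be sketched as a toy beam search (a hypothetical simplification for illustration, not the paper's implementation): candidate grammars are grown by chunking repeated adjacent symbol pairs into new nonterminals, each candidate is scored by the description length of its rules plus the rewritten sequence, and the beam keeps the lowest-scoring candidates.

```python
from collections import Counter

def mdl_score(rules, seq):
    # Description length = symbols needed to write the grammar rules
    # plus symbols needed to write the sequence using those rules.
    grammar_len = sum(1 + len(rhs) for rhs in rules.values())
    return grammar_len + len(seq)

def chunk(seq, pair, symbol):
    # Rewrite the sequence, replacing each (non-overlapping) occurrence
    # of an adjacent pair with a new nonterminal symbol.
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(symbol)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

def beam_search(seq, width=3, steps=5):
    # Each beam state: (MDL score, grammar rules, rewritten sequence).
    beam = [(mdl_score({}, seq), {}, list(seq))]
    for step in range(steps):
        candidates = list(beam)
        for score, rules, s in beam:
            pairs = Counter(zip(s, s[1:]))
            for pair, count in pairs.most_common(width):
                if count < 2:
                    continue  # chunking a pair seen once never shortens the code
                nt = f"N{len(rules)}_{step}"  # fresh nonterminal name
                new_rules = dict(rules, **{nt: pair})
                new_seq = chunk(s, pair, nt)
                candidates.append((mdl_score(new_rules, new_seq),
                                   new_rules, new_seq))
        candidates.sort(key=lambda c: c[0])
        beam = candidates[:width]  # keep the lowest-MDL grammars
    return beam[0]
```

For a repetitive demonstration such as `a b c a b c a b c a b c`, the search discovers a rule for a repeated sub-pattern and compresses the sequence, lowering the total description length below that of the raw string; noisy symbols that repeat rarely are left unchunked, which is the intuition behind the paper's noise tolerance.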
Pages: 97-102
Number of pages: 5
References
15 references in total
  • [1] CALINON S, EVRARD P, GRIBOVSKAYA E, et al., Learning collaborative manipulation tasks by demonstration using a haptic interface, Proc of International Conference on Advanced Robotics, pp. 837-842, (2009)
  • [2] (2017)
  • [3] LIU D, LU B, CONG M., Robot skill learning based on interacting with RGB-D image, Proc of 2019 IEEE 9th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems, pp. 1204-1209, (2019)
  • [4] (2018)
  • [5] (2015)
  • [6] GU Y, SHENG W, CRICK C, et al., Automated assembly skill acquisition and implementation through human demonstration, Robotics and Autonomous Systems, 99, pp. 1-16, (2018)
  • [7] LIOUTIKOV R, MAEDA G, VEIGA F, et al., Inducing probabilistic context-free grammars for the sequencing of movement primitives, Proc of 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 5651-5658, (2018)
  • [8] IVANOV Y A, BOBICK A F., Recognition of visual activities and interactions by stochastic parsing, IEEE Transactions on Pattern Analysis & Machine Intelligence, 22, 8, pp. 852-872, (2000)
  • [9] WANG T S, SHUM H Y, XU Y Q, et al., Unsupervised analysis of human gestures, Proc of Pacific-Rim Conference on Multimedia, pp. 174-181, (2001)
  • [10] KITANI K M, SATO Y, SUGIMOTO A., Recovering the basic structure of human activities from a video-based symbol string, International Journal of Pattern Recognition & Artificial Intelligence, 22, 8, pp. 1621-1646, (2007)