Environment-adaptive service strategy generation method for robot

Cited by: 0
Authors
Tian G. [1 ]
Chen H. [1 ]
Zhang M. [1 ]
Cui Y. [1 ]
Affiliations
[1] School of Control Science and Engineering, Shandong University, Jinan
Keywords
deep learning; environment-adaptive; keyword sequence; robotic service strategy; text generation
DOI
10.13245/j.hust.228784
Abstract
To improve robots' ability to execute service tasks in different home environments, an environment-adaptive service strategy generation method was proposed that generates a service strategy from information about the objects in the current environment. First, the term frequency-inverse document frequency (TF-IDF) algorithm was used to construct a service instruction set, a keyword sequence set, and a service strategy data set. Second, irregular natural language instructions were semantically parsed and chunked, then decomposed and mapped to structured service instructions, which simplifies the semantic space and yields the corresponding candidate keyword sequences. Finally, a Protégé ontology knowledge base containing information on the current home environment was matched and reasoned over to obtain the service keyword sequence, which guided a GPT-2 model fine-tuned on the service strategy data set to generate the adaptive service strategy. Experimental results show that the method improves the accuracy of service strategy generation, and the generated strategies are more feasible in a specific home environment. © 2023 Huazhong University of Science and Technology. All rights reserved.
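As a rough illustration of the pipeline described in the abstract (not the authors' implementation), the sketch below scores instruction terms with TF-IDF and then lets the resulting keyword sequence guide text generation with a GPT-2 checkpoint via the Hugging Face transformers library. The toy corpus, the prompt format, and the stock "gpt2" checkpoint standing in for the paper's fine-tuned model are all assumptions.

# Hedged sketch: TF-IDF keyword scoring plus keyword-guided GPT-2 generation.
# The corpus, prompt template, and "gpt2" checkpoint are illustrative stand-ins;
# the paper fine-tunes GPT-2 on its own service strategy data set.
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import GPT2LMHeadModel, GPT2Tokenizer

corpus = [  # toy service instruction set
    "bring a cup of water from the kitchen table",
    "fetch the medicine bottle on the nightstand",
    "take the remote control to the living room sofa",
]

# Score terms of the first instruction and keep the top-3 as its keyword sequence.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)
terms = vectorizer.get_feature_names_out()
scores = tfidf[0].toarray().ravel()
keywords = [terms[i] for i in scores.argsort()[::-1][:3]]

# Let the keyword sequence guide generation; a fine-tuned model would map
# "keywords -> strategy" far more reliably than the base checkpoint used here.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
prompt = "keywords: " + ", ".join(keywords) + " -> strategy:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=40, do_sample=False,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))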
Pages: 102-108
Page count: 6