A deep learning framework for realistic robot motion generation

Cited by: 13
Authors
Dong, Ran [1 ]
Chang, Qiong [2 ]
Ikuno, Soichiro [1 ]
Affiliations
[1] Tokyo Univ Technol, Sch Comp Sci, Tokyo 1920982, Japan
[2] Tokyo Inst Technol, Sch Comp, Tokyo 1528550, Japan
Source
NEURAL COMPUTING & APPLICATIONS | 2023, Vol. 35, Issue 32
Keywords
Realistic motion generation; Convolutional autoencoder; Multivariate empirical mode decomposition; Human in the loop; SPECTRUM;
DOI
10.1007/s00521-021-06192-3
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Humanoid robots are being developed to play the role of personal assistants. With the development of artificial intelligence technology, humanoid robots are expected to perform many human tasks, such as housework, human care, and even medical treatment. However, robots currently cannot move as flexibly as humans, which limits their fine motor skill performance. This is primarily because traditional robot control methods rely on manipulators that are difficult to articulate smoothly. To solve this problem, we propose a nonlinear realistic robot motion generation method based on deep learning. Our method decomposes human motions into basic motions and realistic motions using multivariate empirical mode decomposition, and learns the biomechanical relationships between them using an autoencoder generation network. The experimental results show that realistic motion features can be learned by the generation network, and that motion realism can be increased by adding the learned motions to the robots.
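The pipeline the abstract describes — splitting a motion signal into a low-frequency "basic" component and a high-frequency "realistic" residual, then training a network to predict the residual from the basic motion — can be sketched roughly as follows. This is an illustrative NumPy sketch, not the authors' implementation: a moving-average filter stands in for multivariate empirical mode decomposition, and a single-hidden-layer regressor stands in for the paper's convolutional autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D joint-angle trajectory: a slow "basic" motion plus
# a high-frequency "realistic" detail (tremor-like fluctuation).
t = np.linspace(0, 2 * np.pi, 256)
signal = np.sin(t) + 0.1 * np.sin(12 * t)

def decompose(x, win=25):
    """Stand-in for MEMD: moving average = basic motion,
    residual = realistic (high-frequency) motion."""
    kernel = np.ones(win) / win
    pad = np.pad(x, win // 2, mode="edge")
    basic = np.convolve(pad, kernel, mode="valid")[: len(x)]
    return basic, x - basic

basic, realistic = decompose(signal)

# Tiny one-hidden-layer network mapping windows of basic motion to the
# realistic residual at the window centre (a hypothetical stand-in for
# the paper's convolutional autoencoder generation network).
W = 16  # input window length
H = 8   # hidden units
X = np.stack([basic[i : i + W] for i in range(len(t) - W)])
y = realistic[W // 2 : W // 2 + len(X)]

W1 = rng.normal(0, 0.1, (W, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, H);      b2 = 0.0
lr = 0.02
for _ in range(2000):                  # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - y
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ gh / len(y); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"reconstruction MSE of realistic component: {mse:.4f}")
```

The learned residual would then be added back onto the robot's planned basic trajectory to increase realism, as the abstract describes; in practice the paper operates on multivariate motion-capture channels rather than a single synthetic signal.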
Pages: 23343-23356
Page count: 14
Related Papers
50 records total
  • [1] A deep learning framework for realistic robot motion generation
    Ran Dong
    Qiong Chang
    Soichiro Ikuno
    [J]. Neural Computing and Applications, 2023, 35 : 23343 - 23356
  • [2] Realistic Respiratory Motion Simulation Using Deep Learning
    Lee, D.
    Nadeem, S.
    Hu, Y.
    [J]. MEDICAL PHYSICS, 2022, 49 (06) : E526 - E526
  • [3] A General Framework of Motion Planning for Redundant Robot Manipulator Based on Deep Reinforcement Learning
    Li, Xiangjian
    Liu, Huashan
    Dong, Menghua
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (08) : 5253 - 5263
  • [4] A Hierarchical Robot Learning Framework for Manipulator Reactive Motion Generation via Multi-Agent Reinforcement Learning and Riemannian Motion Policies
    Wang, Yuliu
    Sagawa, Ryusuke
    Yoshiyasu, Yusuke
    [J]. IEEE ACCESS, 2023, 11 : 126979 - 126994
  • [5] Dynamic Motion Generation by Flexible-Joint Robot based on Deep Learning using Images
    Wu, Yuheng
    Takahashi, Kuniyuki
    Yamada, Hiroki
    Kim, Kitae
    Murata, Shingo
    Sugano, Shigeki
    Ogata, Tetsuya
    [J]. 2018 JOINT IEEE 8TH INTERNATIONAL CONFERENCE ON DEVELOPMENT AND LEARNING AND EPIGENETIC ROBOTICS (ICDL-EPIROB), 2018, : 169 - 174
  • [6] Walking Motion Generation of Bipedal Robot Based on Planar Covariation Using Deep Reinforcement Learning
    Yamano, Junsei
    Kurokawa, Masaki
    Sakai, Yuki
    Hashimoto, Kenji
    [J]. SYNERGETIC COOPERATION BETWEEN ROBOTS AND HUMANS, VOL 1, CLAWAR 2023, 2024, 810 : 217 - 228
  • [7] Motion Generation in the MRROC++ Robot Programming Framework
    Zielinski, Cezary
    Winiarski, Tomasz
    [J]. INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2010, 29 (04): : 386 - 413
  • [8] A Deep Reinforcement Learning Framework for Column Generation
    Chi, Cheng
    Aboussalah, Amine Mohamed
    Khalil, Elias B.
    Wang, Juyoung
    Sherkat-Masoumi, Zoha
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [9] A Deep Learning Framework for Character Motion Synthesis and Editing
    Holden, Daniel
    Saito, Jun
    Komura, Taku
    [J]. ACM TRANSACTIONS ON GRAPHICS, 2016, 35 (04):
  • [10] VULGEN: Realistic Vulnerability Generation Via Pattern Mining and Deep Learning
    Nong, Yu
    Ou, Yuzhe
    Pradel, Michael
    Chen, Feng
    Cai, Haipeng
    [J]. 2023 IEEE/ACM 45TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING, ICSE, 2023, : 2527 - 2539