Learning from observation paradigm: Leg task models for enabling a biped humanoid robot to imitate human dances

Cited by: 94
Authors
Nakaoka, Shin'ichiro
Nakazawa, Atsushi
Kanehiro, Fumio
Kaneko, Kenji
Morisawa, Mitsuharu
Hirukawa, Hirohisa
Ikeuchi, Katsushi
Affiliations
[1] Univ Tokyo, Inst Ind Sci, Meguro Ku, Tokyo 1538505, Japan
[2] Osaka Univ, Cybermedia Ctr, Osaka 5600043, Japan
[3] Natl Inst Adv Ind Sci & Technol, Intelligent Syst Res Inst, Tsukuba, Ibaraki 3058568, Japan
Source
Keywords
learning from observation; imitation; biped humanoid robot; motion capture; entertainment robotics;
DOI
10.1177/0278364907079430
CLC classification
TP24 [Robotics];
Discipline codes
080202; 1405;
Abstract
This paper proposes a framework that achieves the Learning from Observation paradigm for learning dance motions. The framework enables a humanoid robot to imitate dance motions captured from human demonstrations. This study focuses especially on leg motions, in a novel attempt to have a biped robot imitate not only upper-body motions but also leg motions, including steps. Body differences between the robot and the original dancer make the problem difficult, because they prevent the robot from straightforwardly following the original motions and they also change the dynamic body balance. We propose leg task models, which play a key role in solving the problem. Low-level tasks in leg motion are modelled so that they clearly provide the essential information required for keeping dynamic stability as well as important motion characteristics. The models divide the problem of adapting motions into the problem of recognizing a sequence of tasks and the problem of executing the task sequence. We have developed a method for recognizing the tasks from captured motion data and a method for generating motions for the tasks that can be executed by existing robots, including HRP-2. HRP-2 successfully performed the generated motions, which imitated a traditional folk dance performed by human dancers.
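The decomposition the abstract describes, first recognizing a sequence of leg tasks from captured motion data and then executing that sequence, can be illustrated with a toy sketch. The task names, data format, and recognizer below are hypothetical stand-ins, not the paper's actual leg task models:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Tuple

class LegTask(Enum):
    """Illustrative leg task primitives (hypothetical, not the paper's exact set)."""
    STAND = auto()  # both feet planted on the ground
    STEP = auto()   # one foot swings to a new foothold

@dataclass
class TaskInstance:
    task: LegTask
    start: float            # start time in the captured motion [s]
    end: float              # end time [s]
    params: dict = field(default_factory=dict)  # e.g. {"foot": "left"}

def recognize_tasks(samples: List[Tuple[float, bool, bool]]) -> List[TaskInstance]:
    """Segment a timeline of (time, left_foot_down, right_foot_down) contact
    samples into STAND and STEP instances: a STEP begins when one foot
    leaves the ground and ends when both feet touch down again."""
    tasks: List[TaskInstance] = []
    cur, start, params = LegTask.STAND, samples[0][0], {}
    for t, left_down, right_down in samples:
        both_down = left_down and right_down
        if cur is LegTask.STAND and not both_down:
            tasks.append(TaskInstance(LegTask.STAND, start, t))
            cur, start = LegTask.STEP, t
            params = {"foot": "left" if not left_down else "right"}
        elif cur is LegTask.STEP and both_down:
            tasks.append(TaskInstance(LegTask.STEP, start, t, params))
            cur, start, params = LegTask.STAND, t, {}
    tasks.append(TaskInstance(cur, start, samples[-1][0], params))
    return tasks

# A short timeline with one left-foot step:
samples = [(0.0, True, True), (0.1, True, True), (0.2, False, True),
           (0.3, False, True), (0.4, True, True), (0.5, True, True)]
sequence = recognize_tasks(samples)
# sequence: STAND [0.0, 0.2), STEP [0.2, 0.4) with the left foot, STAND [0.4, 0.5]
```

Executing such a sequence on a real robot would then replace each recognized task with a dynamically stable motion generated for the robot's own body, which is where the balance adaptation described in the abstract takes place.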
Pages: 829-844
Page count: 16
Related papers
28 records in total
  • [1] Task model of lower body motion for a biped humanoid robot to imitate human dances
    Nakaoka, S
    Nakazawa, A
    Kanehiro, F
    Kaneko, K
    Morisawa, M
    Ikeuchi, K
    2005 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, 2005, : 2769 - 2774
  • [2] Generating whole body motions for a biped humanoid robot from captured human dances
    Nakaoka, S
    Nakazawa, A
    Yokoi, K
    Hirukawa, H
    Ikeuchi, K
    2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS, 2003, : 3905 - 3910
  • [3] Learning biped locomotion from first principles on a simulated humanoid robot using linear genetic programming
    Wolff, K
    Nordin, P
    GENETIC AND EVOLUTIONARY COMPUTATION - GECCO 2003, PT I, PROCEEDINGS, 2003, 2723 : 495 - 506
  • [4] Learning Task Transition from Standing-up to Walking for A Squatted Bipedal Humanoid Robot
    Luo, Dingsheng
    Deng, Yian
    Han, Xiaoqiang
    Hu, Fan
    Wu, Xihong
    2016 IEEE-RAS 16TH INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2016, : 1251 - 1256
  • [5] Human-inspired robot task learning from human teaching
    Wu, Xianghai
    Kofman, Jonathan
    2008 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-9, 2008, : 3334 - 3339
  • [6] Incremental learning of humanoid robot behavior from natural interaction and large language models
    Baermann, Leonard
    Kartmann, Rainer
    Peller-Konrad, Fabian
    Niehues, Jan
    Waibel, Alex
    Asfour, Tamim
    FRONTIERS IN ROBOTICS AND AI, 2024, 11
  • [7] Enabling Embodied Human-Robot Co-Learning: Requirements, Method, and Test With Handover Task
    van Zoelen, Emma M.
    Veldman-Loopik, Hugo
    van den Bosch, Karel
    Neerincx, Mark
    Abbink, David A.
    Peternel, Luka
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (02): : 1425 - 1432
  • [8] Learning from Long-term and Multimodal Interaction between Human and Humanoid Robot
    Suzuki, Kenji
    Harada, Atsushi
    Suzuki, Tomoya
    IECON 2008: 34TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, VOLS 1-5, PROCEEDINGS, 2008, : 3305 - 3310
  • [9] Robot Learning from Human Demonstration of Peg-in-Hole Task
    Wang, Peng
    Zhu, Jianxin
    Feng, Wei
    Ou, Yongsheng
    2018 IEEE 8TH ANNUAL INTERNATIONAL CONFERENCE ON CYBER TECHNOLOGY IN AUTOMATION, CONTROL, AND INTELLIGENT SYSTEMS (IEEE-CYBER), 2018, : 318 - 322
  • [10] Learning a Pick-and-Place Robot Task from Human Demonstration
    Lin, Hsien-I
    Cheng, Chia-Hsien
    Chen, Wei-Kai
    2013 CACS INTERNATIONAL AUTOMATIC CONTROL CONFERENCE (CACS), 2013, : 312 - +