Bootstrapping Humanoid Robot Skills by Extracting Semantic Representations of Human-like Activities from Virtual Reality

Citations: 0
Authors
Ramirez-Amaro, Karinne [1 ]
Inamura, Tetsunari [2 ]
Dean-Leon, Emmanuel [1 ]
Beetz, Michael [3 ]
Cheng, Gordon [1 ]
Affiliations
[1] Tech Univ Munich, Inst Cognit Syst, Fac Elect Engn, D-80290 Munich, Germany
[2] Natl Inst Informat, Tokyo, Japan
[3] Univ Bremen, Inst Artificial Intelligence, D-28359 Bremen, Germany
Keywords: none listed
DOI: not available
Chinese Library Classification (CLC): TP24 [Robotics]
Discipline codes: 080202; 1405
Abstract
Advances in Virtual Reality have enabled well-defined and consistent virtual environments that can capture complex scenarios, such as everyday human activities. In addition, virtual simulators such as SIGVerse are designed as user-friendly interfaces between virtual robots/agents and real users, allowing better interaction. We envision that such rich scenarios can be used to train robots to learn new behaviors, especially for everyday human activities, where great variability is found. In this paper, we present a multi-level framework capable of using different input sources, such as cameras and virtual environments, to understand and execute demonstrated activities. The framework first obtains semantic models of human activities from camera observations; these models are then tested in the SIGVerse virtual simulator, where a virtual robot demonstrates new complex activities, such as cleaning a table. The framework is integrated on a real robot, an iCub, which processes the signals from the virtual environment to understand the activities performed by the observed robot. This is realized through prior knowledge and experience that the robot has acquired from observing human activities. Our results show that the framework extracts the meaning of the observed motions with 80% recognition accuracy, obtaining object relationships in the current context via semantic representations to reach a high-level understanding of these complex activities, even when they represent different behaviors.
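To illustrate the kind of semantic representation the abstract describes, the following is a minimal sketch, not the authors' implementation: symbolic rules that map low-level observations (a hand-motion state plus object relations such as what is in the hand and what is being acted on) to a high-level activity label. All function names, rule conditions, and activity labels here are assumptions for illustration only.

```python
# Toy semantic inference: hedged sketch, not the paper's actual rule set.
# Observations are symbolic: a motion state and two object relations.

def infer_activity(motion, object_in_hand, object_acted_on):
    """Map symbolic observations to a high-level activity label.

    motion:          "move" or "not_move" (hand motion state)
    object_in_hand:  name of a grasped object, or None
    object_acted_on: name of an object the hand approaches, or None
    """
    # Hand moving toward an object while holding nothing: reaching.
    if motion == "move" and object_in_hand is None and object_acted_on is not None:
        return "reach"
    # Hand moving while holding an object: transporting it somewhere.
    if motion == "move" and object_in_hand is not None:
        return "put_something_somewhere"
    # Hand at rest but closed on an object: taking/holding it.
    if motion == "not_move" and object_in_hand is not None:
        return "take"
    return "idle"

# Example: a hand moving toward a sponge while holding nothing.
print(infer_activity("move", None, "sponge"))  # -> reach
```

The point of such rules is that the same semantic label can cover visually different motions, which matches the abstract's claim of high-level understanding across different behaviors.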
Pages: 438-443 (6 pages)