Bootstrapping Humanoid Robot Skills by Extracting Semantic Representations of Human-like Activities from Virtual Reality

Cited by: 0
Authors
Ramirez-Amaro, Karinne [1 ]
Inamura, Tetsunari [2 ]
Dean-Leon, Emmanuel [1 ]
Beetz, Michael [3 ]
Cheng, Gordon [1 ]
Affiliations
[1] Tech Univ Munich, Inst Cognit Syst, Fac Elect Engn, D-80290 Munich, Germany
[2] Natl Inst Informat, Tokyo, Japan
[3] Univ Bremen, Inst Artificial Intelligence, D-28359 Bremen, Germany
Keywords
DOI
None available
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
Advancements in Virtual Reality have enabled well-defined and consistent virtual environments that can capture complex scenarios, such as human everyday activities. Additionally, virtual simulators (such as SIGVerse) are designed as user-friendly interfaces between virtual robots/agents and real users, allowing better interaction. We envision that such rich scenarios can be used to train robots to learn new behaviors, especially for human everyday activities, where wide variability can be found. In this paper, we present a multi-level framework that can use different input sources, such as cameras and virtual environments, to understand and execute demonstrated activities. Our framework first obtains semantic models of human activities from cameras; these models are later tested using the SIGVerse virtual simulator, where a virtual robot demonstrates new complex activities (such as cleaning the table). The framework is then integrated on a real robot, an iCub, which processes the signals from the virtual environment to understand the activities performed by the observed robot. This is realized through previous knowledge and experiences that the robot has learned from observing human activities. Our results show that the framework extracted the meaning of the observed motions with 80% recognition accuracy by obtaining object relationships in the current context via semantic representations, yielding a high-level understanding of these complex activities even when they represent different behaviors.
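The abstract describes mapping observed motions and object relations to high-level activity labels via semantic representations. The general idea of such rule-based semantic inference can be sketched as follows; the predicate names (hand_moving, object_in_hand, object_acted_on) and the rules themselves are illustrative assumptions, not the authors' actual implementation:

```python
# Minimal sketch of rule-based semantic activity inference.
# The boolean predicates and activity labels below are hypothetical
# illustrations of the general technique, NOT the paper's actual rules.

def infer_activity(hand_moving: bool, object_in_hand: bool,
                   object_acted_on: bool) -> str:
    """Map low-level observations to a high-level activity label."""
    if not hand_moving:
        # Hand at rest: either idle or holding an object in place.
        return "hold" if object_in_hand else "idle"
    if object_in_hand:
        # Moving while carrying: placing if a target object is involved.
        return "put" if object_acted_on else "move_object"
    # Moving with an empty hand: reaching if directed at an object.
    return "reach" if object_acted_on else "move"

# Example: empty hand moving toward an object on the table.
print(infer_activity(True, False, True))  # → "reach"
```

A decision structure like this keeps the recognition interpretable: each label is traceable to a small set of observable conditions, which is what allows the same rules to transfer from human demonstrations to the virtual and real robot observations described in the abstract.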
Pages: 438-443 (6 pages)
Related papers (50 total)
  • [21] Generating Human-Like Social Motion in a Human-Looking Humanoid Robot: The Biomimetic Approach
    Rahman, S. M. Mizanoor
    2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO), 2013, : 1377 - 1383
  • [22] Human-like Facial Expression Imitation for Humanoid Robot based on Recurrent Neural Network
    Huang, Zhong
    Ren, Fuji
    Bao, Yanwei
    IEEE ICARM 2016 - 2016 INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM), 2016, : 306 - 311
  • [23] A human-like real-time grasp synthesis method for humanoid robot hands
    Lim, MS
    Oh, SR
    Son, J
    You, BJ
    Kim, KB
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2000, 30 (03) : 261 - 271
  • [24] Making planned paths look more human-like in humanoid robot manipulation planning
    Zacharias, F.
    Schlette, C.
    Schmidt, F.
    Borst, C.
    Rossmann, J.
    Hirzinger, G.
    2011 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2011, : 1192 - 1198
  • [25] Human-Like Sequential Learning of Escape Routes for Virtual Reality Agents
    Danial, Syed Nasir
    Smith, Jennifer
    Khan, Faisal
    Veitch, Brian
    FIRE TECHNOLOGY, 2019, 55 (03) : 1057 - 1083
  • [27] Towards Human-Like Learning Dynamics in a Simulated Humanoid Robot for Improved Human-Machine Teaming
    Akshay
    Chen, Xulin
    He, Borui
    Katz, Garrett E.
    AUGMENTED COGNITION, AC 2022, 2022, 13310 : 225 - 241
  • [28] A Human-Like Learning Framework of Robot Interaction Skills Based on Environmental Dynamics
    Liu, Hanzhong
    Yang, Chenguang
    Dai, Shi-Lu
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2021, PT IV, 2021, 13016 : 606 - 616
  • [29] Visuo-motor coordination of a humanoid robot head with human-like vision in face tracking
    Laschi, C
    Miwa, H
    Takanishi, A
    Guglielmelli, E
    Dario, P
    2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS, 2003, : 232 - 237
  • [30] A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution
    Fu, Di
    Abawi, Fares
    Carneiro, Hugo
    Kerzel, Matthias
    Chen, Ziwei
    Strahl, Erik
    Liu, Xun
    Wermter, Stefan
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2023, 15 (8) : 1325 - 1340