Bootstrapping Humanoid Robot Skills by Extracting Semantic Representations of Human-like Activities from Virtual Reality

Cited by: 0
Authors
Ramirez-Amaro, Karinne [1 ]
Inamura, Tetsunari [2 ]
Dean-Leon, Emmanuel [1 ]
Beetz, Michael [3 ]
Cheng, Gordon [1 ]
Affiliations
[1] Tech Univ Munich, Inst Cognit Syst, Fac Elect Engn, D-80290 Munich, Germany
[2] Natl Inst Informat, Tokyo, Japan
[3] Univ Bremen, Inst Artificial Intelligence, D-28359 Bremen, Germany
Keywords
DOI
Not available
CLC number
TP24 [Robotics]
Discipline codes
080202; 1405;
Abstract
Advances in Virtual Reality have enabled well-defined, consistent virtual environments that can capture complex scenarios such as human everyday activities. In addition, virtual simulators (such as SIGVerse) are designed as user-friendly interfaces between virtual robots/agents and real users, allowing better interaction. We envision that such rich scenarios can be used to train robots to learn new behaviors, especially for human everyday activities, where wide variability can be found. In this paper, we present a multi-level framework that can use different input sources, such as cameras and virtual environments, to understand and execute demonstrated activities. Our framework first obtains semantic models of human activities from camera observations; these models are then tested in the SIGVerse virtual simulator, where a virtual robot demonstrates new complex activities (such as cleaning a table). The framework is integrated on a real robot, an iCub, which processes the signals from the virtual environment to understand the activities performed by the observed robot. This is realized through the knowledge and experience the robot has previously acquired from observing human activities. Our results show that the framework extracts the meaning of observed motions with 80% recognition accuracy by obtaining object relations in the current context via semantic representations, yielding a high-level understanding of complex activities even when they comprise different behaviors.
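The semantic reasoning the abstract describes, inferring the meaning of an observed motion from the current hand motion and object relations, can be pictured as a small set of if-then rules. The following is a minimal sketch under that reading; the predicates (`motion`, `object_in_hand`, `object_acted_on`) and activity labels are illustrative assumptions, not the paper's actual rule set:

```python
# Illustrative rule-based semantic inference (hypothetical labels/predicates):
# map low-level observations to a high-level activity label.
def infer_activity(motion, object_in_hand, object_acted_on):
    """Return an activity label given the hand motion and object relations."""
    if motion == "not_moving" and object_in_hand is None:
        return "idle"
    if motion == "moving" and object_in_hand is None:
        return "reaching"
    if motion == "moving" and object_in_hand is not None:
        if object_acted_on is not None:
            return "putting_something_somewhere"
        return "transporting"
    if motion == "tool_use" and object_in_hand is not None:
        return "using_tool"  # e.g. wiping during a table-cleaning activity
    return "unknown"

# Example: a moving hand holding a mug, acting on the table
print(infer_activity("moving", "mug", "table"))
```

A rule table of this kind generalizes across demonstrations because the label depends on the relations between hand and objects rather than on the exact trajectory, which is what allows the same rules to cover different behaviors.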
Pages: 438-443 (6 pages)
Related Papers
(50 total)
  • [41] Generating human-like motions for an underactuated three-link robot based on the virtual constraints approach
    Mettin, Uwe
    La Hera, Pedro
    Freidovich, Leonid
    Shiriaev, Anton
    PROCEEDINGS OF THE 46TH IEEE CONFERENCE ON DECISION AND CONTROL, VOLS 1-14, 2007, : 4846 - 4851
  • [42] The Robot as Scientist: Using Mental Simulation to Test Causal Hypotheses Extracted from Human Activities in Virtual Reality
    Uhde, Constantin
    Berberich, Nicolas
    Ramirez-Amaro, Karinne
    Cheng, Gordon
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 8081 - 8086
  • [43] Extracting Human-Like Driving Behaviors From Expert Driver Data Using Deep Learning
    Sama, Kyle
    Morales, Yoichi
    Liu, Hailong
    Akai, Naoki
    Carballo, Alexander
    Takeuchi, Eijiro
    Takeda, Kazuya
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (09) : 9315 - 9329
  • [44] Venturing into the uncanny valley of mind: The influence of mind attribution on the acceptance of human-like characters in a virtual reality setting
    Stein, Jan-Philipp
    Ohler, Peter
    COGNITION, 2017, 160 : 43 - 50
  • [45] Implementation of human-like driving skills by autonomous fuzzy behavior control on an FPGA-based car-like mobile robot
    Li, THS
    Chang, SJ
    Chen, YX
    IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2003, 50 (05) : 867 - 880
  • [46] Accelerating Humanoid Robot Learning from Human Action Skills Using Context-Aware Middleware
    Phiri, Charles C.
    Ju, Zhaojie
    Liu, Honghai
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2016, PT I, 2016, 9834 : 563 - 574
  • [47] Development of an anthropomorphic head-eye system for a humanoid robot - Realization of human-like head-eye motion using eyelids adjusting to brightness
    Takanishi, A
    Hirano, S
    Sato, K
    1998 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-4, 1998, : 1308 - 1314
  • [48] Teaching a Robot to Grasp Real Fish by Imitation Learning from a Human Supervisor in Virtual Reality
    Dyrstad, Jonatan S.
    Oye, Elling Ruud
    Stahl, Annette
    Mathiassen, John Reidar
    2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 7185 - 7192
  • [49] Non-homologous recombination of deoxyribonucleoside kinases from human and Drosophila melanogaster yields human-like enzymes with novel activities
    Gerth, Monica L.
    Lutz, Stefan
    JOURNAL OF MOLECULAR BIOLOGY, 2007, 370 (04) : 742 - 751
  • [50] The transfer from survey (map-like) to route representations into Virtual Reality Mazes: effect of age and cerebral lesion
    Carelli, Laura
    Rusconi, Maria Luisa
    Scarabelli, Chiara
    Stampatori, Chiara
    Mattioli, Flavia
    Riva, Giuseppe
    JOURNAL OF NEUROENGINEERING AND REHABILITATION, 2011, 8