Programming by demonstration of probabilistic decision making on a multi-modal service robot

Cited by: 2
Authors:
Schmidt-Rohr, Sven R. [1 ]
Loesch, Martin [1 ]
Jaekel, Rainer [1 ]
Dillmann, Ruediger [1 ]
Affiliations:
[1] Karlsruhe Inst Technol, Inst Anthropomat IFA, Karlsruhe, Germany
DOI: 10.1109/IROS.2010.5652268
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract:
In this paper we propose a process that generates abstract service robot mission representations from observed human demonstrations; these representations are then used during execution for autonomous, probabilistic decision making. The observation process is based on the same perceptive components the robot uses during execution, recording dialog between humans, human motion, and object poses. This yields a natural, practical learning process that avoids dedicated demonstration centers or kinesthetic teaching. By generating mission models for probabilistic decision making as Partially Observable Markov Decision Processes (POMDPs), the robot can cope with the uncertain, dynamic environments encountered in real-world settings during execution. Service robot missions in a cafeteria setting, spanning the modalities of mobility, natural human-robot interaction, and object grasping, have been learned and executed by this system.
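The decision-making machinery named in the abstract, a POMDP mission model, rests on Bayesian belief updates over hidden state. The sketch below illustrates that standard update on a hypothetical two-state cafeteria scenario; the states, actions, and probabilities are illustrative assumptions, not values from the paper.

```python
# Toy POMDP belief update: the robot is uncertain whether a cup is on the
# table ("cup") or not ("empty"). All numbers below are hypothetical.
states = ["cup", "empty"]

T = {  # T[(s, a)][s'] = P(s' | s, a); "look" does not change the world
    ("cup", "look"): {"cup": 1.0, "empty": 0.0},
    ("empty", "look"): {"cup": 0.0, "empty": 1.0},
}
Z = {  # Z[(s', a)][o] = P(o | s', a); perception is noisy
    ("cup", "look"): {"see_cup": 0.8, "see_nothing": 0.2},
    ("empty", "look"): {"see_cup": 0.1, "see_nothing": 0.9},
}

def belief_update(belief, action, observation):
    """Bayes filter: b'(s') ∝ Z(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    new_belief = {}
    for s_next in states:
        pred = sum(T[(s, action)][s_next] * p for s, p in belief.items())
        new_belief[s_next] = Z[(s_next, action)].get(observation, 0.0) * pred
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# Starting from a uniform belief, observing "see_cup" raises P(cup):
b = belief_update({"cup": 0.5, "empty": 0.5}, "look", "see_cup")
# b["cup"] = 0.8*0.5 / (0.8*0.5 + 0.1*0.5) ≈ 0.889
```

A policy computed offline over such a model maps the resulting beliefs to actions, which is what lets the robot act robustly despite noisy perception.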
Pages: 784-789 (6 pages)
Related papers (50 total):
  • [31] Learning Probabilistic Decision Making by a Service Robot with Generalization of User Demonstrations and Interactive Refinement
    Schmidt-Rohr, Sven R.
    Romahn, Fabian
    Meissner, Pascal
    Jaekel, Rainer
    Dillmann, Ruediger
    INTELLIGENT AUTONOMOUS SYSTEMS 12, VOL 2, 2013, 194: 369-382
  • [32] Toward the Flexible Automation for Robot Learning from Human Demonstration Using Multi-modal Perception Approach
    Chen, Jing-Hao
    Lu, Guan-Yi
    Chien, Yi-Hsing
    Chiang, Hsin-Han
    Wang, Wei-Yen
    Hsu, Chen-Chien
    PROCEEDINGS OF 2019 INTERNATIONAL CONFERENCE ON SYSTEM SCIENCE AND ENGINEERING (ICSSE), 2019: 148-153
  • [33] A Stumble Detection Method for Programming with Multi-modal Information
    Oka, Hiroki
    Ohnishi, Ayumi
    Terada, Tsutomu
    Tsukamoto, Masahiko
    ADVANCES IN MOBILE COMPUTING AND MULTIMEDIA INTELLIGENCE, MOMM 2022, 2022, 13634: 169-174
  • [34] A Modular Approach to Programming Multi-Modal Sensing Applications
    Abdelmoamen, Ahmed
    2018 IEEE INTERNATIONAL CONFERENCE ON COGNITIVE COMPUTING (ICCC), 2018: 91-98
  • [35] Are You Sure? - Multi-Modal Human Decision Uncertainty Detection in Human-Robot Interaction
    Scherf, Lisa
    Gasche, Lisa Alina
    Chemangui, Eya
    Koert, Dorothea
    PROCEEDINGS OF THE 2024 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2024, 2024: 621-629
  • [36] Task allocation in robot systems with multi-modal capabilities
    Hojda, Maciej
    IFAC PAPERSONLINE, 2015, 48 (03): 2109-2114
  • [37] Multi-modal human robot interaction for map generation
    Saito, H
    Ishimura, K
    Hattori, M
    Takamori, T
    SICE 2002: PROCEEDINGS OF THE 41ST SICE ANNUAL CONFERENCE, VOLS 1-5, 2002: 2721-2724
  • [38] Multi-Modal People Tracking on a Mobile Companion Robot
    Volkhardt, Michael
    Weinrich, Christoph
    Gross, Horst-Michael
    2013 EUROPEAN CONFERENCE ON MOBILE ROBOTS (ECMR 2013), 2013: 288-293
  • [39] Multi-modal anchoring for human-robot interaction
    Fritsch, J
    Kleinehagenbrock, M
    Lang, S
    Plötz, T
    Fink, GA
    Sagerer, G
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2003, 43 (2-3): 133-147
  • [40] A multi-modal object attention system for a mobile robot
    Haasch, A
    Hofemann, N
    Fritsch, J
    Sagerer, G
    2005 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-4, 2005: 1499-1504