Programming by demonstration of probabilistic decision making on a multi-modal service robot

Cited by: 2
Authors
Schmidt-Rohr, Sven R. [1 ]
Loesch, Martin [1 ]
Jaekel, Rainer [1 ]
Dillmann, Ruediger [1 ]
Affiliation
[1] Karlsruhe Institute of Technology, Institute for Anthropomatics (IFA), Karlsruhe, Germany
DOI
10.1109/IROS.2010.5652268
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper we propose a process that generates abstract service robot mission representations by observing human demonstrations; during execution, these representations are used for autonomous, probabilistic decision making. The observation process relies on the same perceptive components the robot uses during execution, recording dialog between humans, human motion, and object poses. This yields a natural, practical learning process that avoids dedicated demonstration centers or kinesthetic teaching. By generating mission models for probabilistic decision making as Partially Observable Markov Decision Processes (POMDPs), the robot can cope with the uncertain and dynamic environments encountered in real-world settings during execution. Service robot missions in a cafeteria setting, covering the modalities of mobility, natural human-robot interaction, and object grasping, have been learned and executed with this system.
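The abstract frames the learned mission models as Partially Observable Markov Decision Processes (POMDPs). For readers unfamiliar with that formalism, the sketch below shows the standard Bayesian belief update that any such POMDP-based executive performs after each action and observation; the states, actions, observations, and probability tables are hypothetical placeholders loosely inspired by the cafeteria scenario, not the authors' learned models or implementation.

# Illustrative sketch only: a minimal discrete POMDP belief update of the kind
# used for mission-level probabilistic decision making. All state, observation,
# and action names are hypothetical examples, not the authors' cafeteria model.
import numpy as np

states = ["cup_on_table", "cup_in_gripper", "cup_delivered"]   # hidden mission states
actions = ["grasp_cup", "hand_over", "ask_human"]
observations = ["see_cup_on_table", "feel_cup_in_gripper", "human_confirms"]

# Transition model T[a][s, s'] = P(s' | s, a); each row sums to 1.
T = {
    "grasp_cup": np.array([[0.2, 0.8, 0.0],
                           [0.0, 1.0, 0.0],
                           [0.0, 0.0, 1.0]]),
    "hand_over": np.array([[1.0, 0.0, 0.0],
                           [0.0, 0.1, 0.9],
                           [0.0, 0.0, 1.0]]),
    "ask_human": np.eye(3),                      # asking does not change the state
}

# Observation model Z[a][s', o] = P(o | s', a): noisy but informative sensing.
Z = {a: np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]]) for a in actions}

def belief_update(belief, action, obs_idx):
    """Bayes filter over hidden states: b'(s') ∝ Z(o|s',a) * sum_s T(s'|s,a) * b(s)."""
    predicted = T[action].T @ belief             # prediction step
    updated = Z[action][:, obs_idx] * predicted  # correction step
    return updated / updated.sum()               # normalize to a distribution

# Example: believe the cup is on the table, grasp, then feel it in the gripper.
b0 = np.array([1.0, 0.0, 0.0])
b1 = belief_update(b0, "grasp_cup", observations.index("feel_cup_in_gripper"))
print(dict(zip(states, b1.round(3))))            # belief mass shifts to "cup_in_gripper"

Given such a belief, a POMDP policy (computed by a generic solver, not shown here) selects the action maximizing expected reward over the belief distribution, which is what lets the robot act reasonably under the perceptual uncertainty the abstract describes.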
Pages: 784-789
Page count: 6
Related papers
50 records in total
  • [41] A Flying Robot with Adaptive Morphology for Multi-Modal Locomotion. Daler, Ludovic; Lecoeur, Julien; Haehlen, Patrizia Bernadette; Floreano, Dario. 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2013: 1361-1366.
  • [42] Multi-modal human robot interaction for map generation. Ghidary, SS; Nakata, Y; Saito, H; Hattori, M; Takamori, T. Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2001), 2001: 2246-2251.
  • [43] Running in the horizontal plane with a multi-modal dynamical robot. Miller, Bruce; Clark, Jonathan; Darnell, Asa. 2013 IEEE International Conference on Robotics and Automation (ICRA), 2013: 3335-3341.
  • [44] Emotional Models for Multi-modal Communication of Robot Partners. Yorita, Akihiro; Botzheim, Janos; Kubota, Naoyuki. 2013 IEEE International Symposium on Industrial Electronics (ISIE), 2013.
  • [45] Perception and Decision-Making for Multi-Modal Interaction Based on Fuzzy Theory in the Dynamic Environment. Zhang, Jie; Wang, Shuxia; He, Weiping; Li, Jianghong; Wu, Shixin; Cao, Zhiwei; Wang, Manxian. International Journal of Human-Computer Interaction, 2023.
  • [46] Enhancing Multi-Modal Perception and Interaction: An Augmented Reality Visualization System for Complex Decision Making. Chen, Liru; Zhao, Hantao; Shi, Chenhui; Wu, Youbo; Yu, Xuewen; Ren, Wenze; Zhang, Ziyi; Shi, Xiaomeng. Systems, 2024, 12 (1).
  • [47] Wearable Multi-modal Interface for Human Multi-robot Interaction. Gromov, Boris; Gambardella, Luca M.; Di Caro, Gianni A. 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), 2016: 240-245.
  • [48] Multiview facial feature tracking with a multi-modal probabilistic model. Tong, Yan; Ji, Qiang. 18th International Conference on Pattern Recognition (ICPR), Vol 1, 2006: 307+.
  • [49] DiPA: Probabilistic Multi-Modal Interactive Prediction for Autonomous Driving. Knittel, Anthony; Hawasly, Majd; Albrecht, Stefano V.; Redford, John; Ramamoorthy, Subramanian. IEEE Robotics and Automation Letters, 2023, 8 (8): 4887-4894.
  • [50] Kernel Trajectory Maps for Multi-Modal Probabilistic Motion Prediction. Zhi, Weiming; Ott, Lionel; Ramos, Fabio. Conference on Robot Learning (CoRL), Vol 100, 2019.