Semantic learning from keyframe demonstration using object attribute constraints

Citations: 0
Authors
Sen, Busra [1 ]
Elfring, Jos [1 ]
Torta, Elena [1 ]
van de Molengraft, Rene [1 ]
Affiliations
[1] Eindhoven Univ Technol, Dept Mech Engn, Eindhoven, Netherlands
Source
Frontiers in Robotics and AI
Keywords
learning from demonstration; keyframe demonstrations; object attributes; task goal learning; semantic learning; ROBOT; REPRESENTATIONS;
DOI
10.3389/frobt.2024.1340334
Chinese Library Classification (CLC)
TP24 [Robotics]
Discipline Code
080202; 1405
Abstract
Learning from demonstration is an approach that allows users to personalize a robot's tasks. While demonstrations often focus on conveying the robot's motion or task plans, they can also communicate user intentions through object attributes in manipulation tasks. For instance, users might want to teach a robot to sort fruits and vegetables into separate boxes or to place cups next to plates of matching colors. This paper introduces a novel method that enables robots to learn the semantics of user demonstrations, with a particular emphasis on the relationships between object attributes. In our approach, users demonstrate the essential task steps by manually guiding the robot through the necessary sequence of poses. We reduce the amount of data by recording only robot poses instead of full trajectories, allowing us to focus on the task's goals, specifically the objects related to those goals. At each step, known as a keyframe, we record the end-effector pose, object poses, and object attributes. However, the number of keyframes saved can vary from demonstration to demonstration depending on the user's decisions. This variability can make the significance of individual keyframes inconsistent across demonstrations, which complicates aligning keyframes to generalize the robot's motion and the user's intention. Our method addresses this issue by teaching the higher-level goals of the task using only the required keyframes and the relevant objects. It aims to capture the rationale behind object selection for a task and to generalize this reasoning to environments with previously unseen objects. We validate the proposed method on three manipulation tasks, each targeting a different object attribute constraint. In the reproduction phase, we show that the robot can generalize the user's intention and execute the task even when it encounters previously unseen objects.
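
To make the idea of attribute-based semantic constraints concrete, the following is a minimal Python sketch, not the authors' implementation: the class names SceneObject and Keyframe, the function consistent_attribute_relations, and the assumption that the first and last keyframes of a demonstration correspond to the pick and place steps are all hypothetical and purely illustrative. The sketch records object attributes at each keyframe and keeps only those attribute relations (e.g., "same color as the reference object") that hold in every demonstration.

# Hypothetical sketch only: names and keyframe conventions are illustrative, not from the paper.
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    pose: tuple          # (x, y, z, qx, qy, qz, qw)
    attributes: dict = field(default_factory=dict)   # e.g. {"color": "red", "category": "cup"}

@dataclass
class Keyframe:
    ee_pose: tuple       # end-effector pose recorded at this demonstration step
    objects: list        # SceneObject instances visible in the scene
    target: str          # name of the object manipulated at this keyframe

def consistent_attribute_relations(demonstrations):
    """Return the attribute keys whose values are shared by the manipulated object
    and the reference object in every demonstration (a stand-in for learned
    object-attribute constraints)."""
    shared_keys = None
    for demo in demonstrations:
        pick, place = demo[0], demo[-1]   # assumption: first/last keyframes are pick/place
        picked = next(o for o in pick.objects if o.name == pick.target)
        reference = next(o for o in place.objects if o.name == place.target)
        matches = {k for k, v in picked.attributes.items()
                   if reference.attributes.get(k) == v}
        shared_keys = matches if shared_keys is None else shared_keys & matches
    return shared_keys or set()

# Toy usage: in both demonstrations the picked cup matches the plate's color.
demo1 = [
    Keyframe((0,) * 7, [SceneObject("cup1", (0,) * 7, {"color": "red", "category": "cup"}),
                        SceneObject("plate1", (0,) * 7, {"color": "red", "category": "plate"})], "cup1"),
    Keyframe((0,) * 7, [SceneObject("plate1", (0,) * 7, {"color": "red", "category": "plate"})], "plate1"),
]
demo2 = [
    Keyframe((0,) * 7, [SceneObject("cup2", (0,) * 7, {"color": "blue", "category": "cup"}),
                        SceneObject("plate2", (0,) * 7, {"color": "blue", "category": "plate"})], "cup2"),
    Keyframe((0,) * 7, [SceneObject("plate2", (0,) * 7, {"color": "blue", "category": "plate"})], "plate2"),
]
print(consistent_attribute_relations([demo1, demo2]))   # -> {'color'}

Run on these two toy demonstrations, in which the picked cup always matches the plate's color, the function returns {'color'}; a constraint of this kind could then be applied to select among previously unseen objects in the reproduction phase.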
Pages: 23