Learning From Visual Demonstrations via Replayed Task-Contrastive Model-Agnostic Meta-Learning

Cited by: 1
Authors
Hu, Ziye [1 ]
Li, Wei [1 ,2 ]
Gan, Zhongxue [1 ,2 ]
Guo, Weikun [1 ]
Zhu, Jiwei [1 ]
Wen, James Zhiqing [2 ]
Zhou, Decheng [2 ]
Institutions
[1] Fudan Univ, Acad Engn & Technol, Shanghai 200433, Peoples R China
[2] Ji Hua Lab, Ctr Intelligent Robot, Dept Engn Res, Foshan 528200, Guangdong, Peoples R China
Keywords
Task analysis; Robots; Microstrip; Visualization; Adaptation models; Training; Reinforcement learning; Meta-learning; learning from demonstrations; one-shot visual imitation learning; learning to learn;
DOI
10.1109/TCSVT.2022.3197147
CLC classification number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline classification code
0808 ; 0809 ;
Abstract
With the increasing application of versatile robotics, the need for end-users to teach robotic tasks via visual/video demonstrations in different environments is growing rapidly. One possible method is meta-learning. However, most meta-learning methods are tailored for image classification or focus only on teaching the robot what to do, which limits the robot's ability to adapt to the real world. Thus, we propose a novel yet efficient model-agnostic meta-learning framework based on task-contrastive learning that teaches the robot both what to do and what not to do through positive and negative demonstrations. Our approach divides the learning procedure from visual/video demonstrations into three parts. The first part distinguishes between positive and negative demonstrations via task-contrastive learning. The second part emphasizes what the positive demonstration is doing, and the last part predicts what the robot needs to do. Finally, we demonstrate the effectiveness of our meta-learning approach on 1) two standard public simulated benchmarks and 2) real-world placing experiments using a UR5 robot arm, significantly outperforming current related state-of-the-art methods.
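The abstract describes a model-agnostic meta-learning (MAML) framework: an inner loop adapts to each task from a few demonstrations, and an outer loop updates the shared initialization so that one-step adaptation works well across tasks. The following toy sketch is a minimal illustration of that inner/outer-loop structure only; it is hypothetical and is not the paper's code (the task setup, scalar parameter, step sizes, and the finite-difference meta-gradient are all simplifying assumptions).

```python
# Toy MAML sketch: each "task" fits y = w * x with squared loss, and the
# meta-learner finds an initialization w0 that adapts well to any task
# after a single inner-loop gradient step.

def loss_grad(w, x, y):
    """Gradient of 0.5 * (w*x - y)**2 with respect to w."""
    return (w * x - y) * x

def inner_adapt(w0, task, alpha=0.1):
    """Inner loop: one gradient step on the task's support data."""
    x, y = task
    return w0 - alpha * loss_grad(w0, x, y)

def maml_step(w0, tasks, alpha=0.1, beta=0.01):
    """Outer loop: one meta-update over a batch of tasks.
    The meta-gradient is estimated by central finite differences
    for simplicity (real implementations backprop through the
    inner step instead)."""
    eps = 1e-5

    def meta_loss(w):
        total = 0.0
        for task in tasks:
            w_adapted = inner_adapt(w, task, alpha)
            x, y = task
            total += 0.5 * (w_adapted * x - y) ** 2
        return total / len(tasks)

    grad = (meta_loss(w0 + eps) - meta_loss(w0 - eps)) / (2 * eps)
    return w0 - beta * grad

# Three hypothetical tasks, each a single (x, y) support/query pair.
tasks = [(1.0, 2.0), (2.0, 6.0), (1.5, 3.0)]
w = 0.0
for _ in range(500):
    w = maml_step(w, tasks)
```

After meta-training, one inner-loop step from the learned `w` yields a much lower per-task loss than the same step from the untrained initialization, which is the core property MAML-style methods (including this paper's task-contrastive variant) optimize for.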
Pages: 8756 - 8767
Page count: 12
Related papers
50 items total
  • [31] Specific Emitter Identification With Limited Samples: A Model-Agnostic Meta-Learning Approach
    Yang, Ning
    Zhang, Bangning
    Ding, Guoru
    Wei, Yimin
    Wei, Guofeng
    Wang, Jian
    Guo, Daoxing
    [J]. IEEE COMMUNICATIONS LETTERS, 2022, 26 (02) : 345 - 349
  • [32] Few-shot RUL estimation based on model-agnostic meta-learning
    Mo, Yu
    Li, Liang
    Huang, Biqing
    Li, Xiu
    [J]. JOURNAL OF INTELLIGENT MANUFACTURING, 2023, 34 (05) : 2359 - 2372
  • [33] Domain-Invariant Speaker Vector Projection by Model-Agnostic Meta-Learning
    Kang, Jiawen
    Liu, Ruiqi
    Li, Lantian
    Cai, Yunqi
    Wang, Dong
    Zheng, Thomas Fang
    [J]. INTERSPEECH 2020, 2020, : 3825 - 3829
  • [35] Stochastic Deep Networks with Linear Competing Units for Model-Agnostic Meta-Learning
    Kalais, Konstantinos
    Chatzis, Sotirios
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022, : 10586 - 10597
  • [36] Task Agnostic Meta-Learning for Few-Shot Learning
    Jamal, Muhammad Abdullah
    Qi, Guo-Jun
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 11711 - 11719
  • [37] Kronecker-factored Approximate Curvature with adaptive learning rate for optimizing model-agnostic meta-learning
    Zhang, Ce
    Yao, Xiao
    Shi, Changfeng
    Gu, Min
    [J]. MULTIMEDIA SYSTEMS, 2023, 29 (06) : 3169 - 3177
  • [38] Convolutional Shrinkage Neural Networks Based Model-Agnostic Meta-Learning for Few-Shot Learning
    Yunpeng He
    Chuanzhi Zang
    Peng Zeng
    Qingwei Dong
    Ding Liu
    Yuqi Liu
    [J]. Neural Processing Letters, 2023, 55 : 505 - 518
  • [39] Model-Agnostic Learning to Meta-Learn
    Devos, Arnout
    Dandi, Yatin
    [J]. NEURIPS 2020 WORKSHOP ON PRE-REGISTRATION IN MACHINE LEARNING, VOL 148, 2020, 148 : 155 - 175