Imitation Learning from a Single Demonstration Leveraging Vector Quantization for Robotic Harvesting

Cited: 0
Authors
Porichis, Antonios [1,2]
Inglezou, Myrto [1]
Kegkeroglou, Nikolaos [3]
Mohan, Vishwanathan [1]
Chatzakos, Panagiotis [1]
Affiliations
[1] Univ Essex, AI Innovat Ctr, Wivenhoe Pk, Colchester CO4 3SQ, England
[2] Natl Struct Integr Res Ctr, Granta Pk, Cambridge CB21 6AL, England
[3] TWI Hellas, 280 Kifisias Ave, Halandri 15232, Greece
Funding
EU Horizon 2020;
Keywords
imitation learning; learning by demonstration; vector quantization; mushroom harvesting; visual servoing;
DOI
10.3390/robotics13070098
CLC Classification
TP24 [Robotics];
Discipline Codes
080202; 1405;
Abstract
The ability of robots to tackle complex, non-repetitive tasks will be key to bringing a new level of automation to agricultural applications that, owing to their high cognitive requirements, still involve labor-intensive, menial, and physically demanding work. Harvesting is one such example: it requires a combination of motions that can generally be broken down into a visual servoing phase and a manipulation phase, the latter often being straightforward to pre-program. In this work, we focus on fresh mushroom harvesting, which is still performed manually by human pickers due to its high complexity. A key challenge is enabling harvesting with low-cost hardware and mechanical systems, such as soft grippers, which pose additional challenges compared to their rigid counterparts. We devise an imitation learning pipeline that uses vector quantization to learn quantized embeddings directly from visual inputs. We test this approach in a realistic environment designed from recordings of human experts harvesting real mushrooms. Our models control a Cartesian robot with a soft, pneumatically actuated gripper to successfully replicate the mushroom outrooting sequence. We achieve 100% success in picking mushrooms among distractors with less than 20 min of data collection, comprising a single expert demonstration and auxiliary, non-expert trajectories. The entire model pipeline requires less than 40 min of training on a single A4000 GPU and approx. 20 ms for inference on a standard laptop GPU.
Pages: 18
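The abstract describes learning quantized embeddings directly from visual inputs via vector quantization. As a rough illustration of that general technique only, the sketch below implements a minimal VQ-VAE-style quantization bottleneck (van den Oord et al., 2017) in PyTorch. It is not the authors' pipeline: the class name, dimensions, codebook size, and loss weight (num_codes, code_dim, beta) are all assumptions made for the example.

```python
# Minimal sketch of a vector-quantization bottleneck in the style of VQ-VAE.
# Illustrative only; NOT the paper's implementation. All hyperparameters
# below (num_codes, code_dim, beta) are assumed values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes: int = 64, code_dim: int = 32, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # weight of the commitment term

    def forward(self, z_e: torch.Tensor):
        # z_e: continuous encoder features, shape (batch, code_dim).
        # Snap each feature vector to its nearest codebook entry (L2 distance).
        dists = torch.cdist(z_e, self.codebook.weight)  # (batch, num_codes)
        indices = dists.argmin(dim=1)                   # discrete code indices
        z_q = self.codebook(indices)                    # quantized embeddings

        # Codebook loss pulls codes toward encoder outputs; commitment loss
        # keeps the encoder close to its assigned codes.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())

        # Straight-through estimator: gradients flow past the argmin.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, indices, loss

# Usage: quantize features from a (hypothetical) visual encoder, then feed
# the discrete embeddings to a downstream policy head.
vq = VectorQuantizer()
features = torch.randn(8, 32)  # stand-in for visual encoder output
z_q, codes, vq_loss = vq(features)
print(z_q.shape, codes.shape, vq_loss.item())
```

The discrete bottleneck is what makes this style of pipeline data-efficient: the downstream policy sees a small, fixed vocabulary of visual states rather than raw continuous features, which plausibly helps when learning from a single demonstration.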