Sim-to-Real Model-Based and Model-Free Deep Reinforcement Learning for Tactile Pushing

Cited by: 3
Authors
Yang, Max [1 ]
Lin, Yijiong [1 ]
Church, Alex [1 ]
Lloyd, John [1 ]
Zhang, Dandan [1 ]
Barton, David A. W. [1 ]
Lepora, Nathan F. [1 ]
Affiliations
[1] Univ Bristol, Bristol Robot Lab, Dept Engn Math, Bristol BS8 1UB, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Force and tactile sensing; dexterous manipulation; reinforcement learning;
DOI
10.1109/LRA.2023.3295236
CLC Classification Number
TP24 [Robotics];
Discipline Classification Codes
080202 ; 1405 ;
Abstract
Object pushing presents a key non-prehensile manipulation problem that is illustrative of more complex robotic manipulation tasks. While deep reinforcement learning (RL) methods have demonstrated impressive learning capabilities using visual input, a lack of tactile sensing limits their capability for fine and reliable control during manipulation. Here we propose a deep RL approach to object pushing using tactile sensing without visual input, namely tactile pushing. We present a goal-conditioned formulation that allows both model-free and model-based RL to obtain accurate policies for pushing an object to a goal. To achieve real-world performance, we adopt a sim-to-real approach. Our results demonstrate that it is possible to train on a single object and a limited sample of goals to produce precise and reliable policies that can generalize to a variety of unseen objects and pushing scenarios without domain randomization. We experiment with the trained agents in harsh pushing conditions, and show that with significantly more training samples, a model-free policy can outperform a model-based planner, generating shorter and more reliable pushing trajectories despite large disturbances. The simplicity of our training environment and effective real-world performance highlight the value of rich tactile information for fine manipulation.
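The goal-conditioned formulation described in the abstract can be illustrated with a minimal sketch: the agent's observation combines tactile features with the goal pose expressed relative to the current object pose, and a dense reward penalizes the remaining distance to the goal. All function names, weights, and pose conventions below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def goal_conditioned_obs(tactile_features, object_pose, goal_pose):
    """Concatenate tactile features with the goal pose expressed
    relative to the current object pose; poses are (x, y, theta)."""
    dx, dy = goal_pose[:2] - object_pose[:2]
    # Wrap the angular error into [-pi, pi).
    dtheta = (goal_pose[2] - object_pose[2] + np.pi) % (2 * np.pi) - np.pi
    return np.concatenate([tactile_features, [dx, dy, dtheta]])

def dense_reward(object_pose, goal_pose, w_pos=1.0, w_ang=0.1):
    """Negative weighted distance to the goal: drives the policy to
    push the object toward the goal position and orientation."""
    pos_err = np.linalg.norm(goal_pose[:2] - object_pose[:2])
    ang_err = abs((goal_pose[2] - object_pose[2] + np.pi) % (2 * np.pi) - np.pi)
    return -(w_pos * pos_err + w_ang * ang_err)
```

Because the goal enters only through the relative pose terms, a single trained policy can in principle be queried for arbitrary goals at test time, which matches the abstract's claim of generalizing from a limited sample of training goals.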
Pages: 5480-5487
Page count: 8
Related Papers
50 total
  • [1] Bi-Touch: Bimanual Tactile Manipulation With Sim-to-Real Deep Reinforcement Learning
    Lin, Yijiong
    Church, Alex
    Yang, Max
    Li, Haoran
    Lloyd, John
    Zhang, Dandan
    Lepora, Nathan F.
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (09) : 5472 - 5479
  • [2] Model-based and Model-free Reinforcement Learning for Visual Servoing
    Farahmand, Amir Massoud
    Shademan, Azad
    Jagersand, Martin
    Szepesvari, Csaba
    [J]. ICRA: 2009 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-7, 2009, : 4135 - 4142
  • [3] Expert Initialized Hybrid Model-Based and Model-Free Reinforcement Learning
    Langaa, Jeppe
    Sloth, Christoffer
    [J]. 2023 EUROPEAN CONTROL CONFERENCE, ECC, 2023,
  • [4] Model-Based and Model-Free Replay Mechanisms for Reinforcement Learning in Neurorobotics
    Massi, Elisa
    Barthelemy, Jeanne
    Mailly, Juliane
    Dromnelle, Remi
    Canitrot, Julien
    Poniatowski, Esther
    Girard, Benoit
    Khamassi, Mehdi
    [J]. FRONTIERS IN NEUROROBOTICS, 2022, 16
  • [5] Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning
    Swazinna, Phillip
    Udluft, Steffen
    Hein, Daniel
    Runkler, Thomas
    [J]. IFAC PAPERSONLINE, 2022, 55 (15): : 19 - 26
  • [6] Hybrid control for combining model-based and model-free reinforcement learning
    Pinosky, Allison
    Abraham, Ian
    Broad, Alexander
    Argall, Brenna
    Murphey, Todd D.
    [J]. INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2023, 42 (06): : 337 - 355
  • [7] Sim-to-Real Transfer in Deep Reinforcement Learning for Robotics: a Survey
    Zhao, Wenshuai
    Queralta, Jorge Pena
    Westerlund, Tomi
    [J]. 2020 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2020, : 737 - 744
  • [8] Sim-to-Real in Reinforcement Learning for Everyone
    Vacaro, Juliano
    Marques, Guilherme
    Oliveira, Bruna
    Paz, Gabriel
    Paula, Thomas
    Staehler, Wagston
    Murphy, David
    [J]. 2019 LATIN AMERICAN ROBOTICS SYMPOSIUM, 2019 BRAZILIAN SYMPOSIUM ON ROBOTICS (SBR) AND 2019 WORKSHOP ON ROBOTICS IN EDUCATION (LARS-SBR-WRE 2019), 2019, : 305 - 310
  • [9] Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning
    Nagabandi, Anusha
    Kahn, Gregory
    Fearing, Ronald S.
    Levine, Sergey
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2018, : 7579 - 7586
  • [10] Parallel model-based and model-free reinforcement learning for card sorting performance
    Steinke, Alexander
    Lange, Florian
    Kopp, Bruno
    [J]. SCIENTIFIC REPORTS, 2020, 10 (01)