Sim-to-Real Model-Based and Model-Free Deep Reinforcement Learning for Tactile Pushing

Cited by: 3
Authors
Yang, Max [1 ]
Lin, Yijiong [1 ]
Church, Alex [1 ]
Lloyd, John [1 ]
Zhang, Dandan [1 ]
Barton, David A. W. [1 ]
Lepora, Nathan F. [1 ]
Affiliations
[1] Univ Bristol, Bristol Robot Lab, Dept Engn Math, Bristol BS8 1UB, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Force and tactile sensing; dexterous manipulation; reinforcement learning;
DOI
10.1109/LRA.2023.3295236
Chinese Library Classification
TP24 [Robotics];
Discipline Codes
080202 ; 1405 ;
Abstract
Object pushing presents a key non-prehensile manipulation problem that is illustrative of more complex robotic manipulation tasks. While deep reinforcement learning (RL) methods have demonstrated impressive learning capabilities using visual input, a lack of tactile sensing limits their capability for fine and reliable control during manipulation. Here we propose a deep RL approach to object pushing using tactile sensing without visual input, namely tactile pushing. We present a goal-conditioned formulation that allows both model-free and model-based RL to obtain accurate policies for pushing an object to a goal. To achieve real-world performance, we adopt a sim-to-real approach. Our results demonstrate that it is possible to train on a single object and a limited sample of goals to produce precise and reliable policies that can generalize to a variety of unseen objects and pushing scenarios without domain randomization. We experiment with the trained agents in harsh pushing conditions, and show that with significantly more training samples, a model-free policy can outperform a model-based planner, generating shorter and more reliable pushing trajectories despite large disturbances. The simplicity of our training environment and effective real-world performance highlight the value of rich tactile information for fine manipulation.
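The goal-conditioned formulation described in the abstract can be illustrated with a minimal sketch: the policy observes tactile features together with the goal expressed relative to the current object pose, and receives a reward based on the distance to the goal. Note this is an assumption-laden illustration of the general technique, not the paper's actual observation space, reward weights, or sensor encoding; all names (`goal_conditioned_obs`, `pushing_reward`) and weight values are hypothetical.

```python
import math

def goal_conditioned_obs(tactile_feat, object_pose, goal_pose):
    """Illustrative goal-conditioned observation: concatenate tactile
    features with the goal expressed relative to the object pose
    (dx, dy, dtheta), so one trained policy can serve many goals."""
    rel = [g - o for g, o in zip(goal_pose, object_pose)]
    return list(tactile_feat) + rel

def pushing_reward(object_pose, goal_pose, pos_weight=1.0, ang_weight=0.1):
    """Illustrative dense reward: negative weighted distance between the
    object pose (x, y, theta) and the goal pose; zero when the goal is
    reached. Weights are placeholder values, not from the paper."""
    dx = goal_pose[0] - object_pose[0]
    dy = goal_pose[1] - object_pose[1]
    dpos = math.hypot(dx, dy)                # planar position error
    dang = abs(goal_pose[2] - object_pose[2])  # orientation error
    return -(pos_weight * dpos + ang_weight * dang)
```

Under this kind of formulation, training on a limited sample of goals amounts to sampling `goal_pose` per episode, which is what allows generalization to unseen goals at test time.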
Pages: 5480-5487
Page count: 8