SoMoGym: A Toolkit for Developing and Evaluating Controllers and Reinforcement Learning Algorithms for Soft Robots

Cited by: 15
Authors
Graule, Moritz A. [1]
McCarthy, Thomas P. [1]
Teeple, Clark B. [1]
Werfel, Justin [1]
Wood, Robert J. [1]
Affiliations
[1] Harvard University, John A. Paulson School of Engineering and Applied Sciences, Allston, MA 02134 USA
Funding
U.S. National Science Foundation
Keywords
Task analysis; Robots; Benchmark testing; Soft robotics; Manipulators; Actuators; Training; Soft robot applications; Modeling, control, and learning for soft robots; Reinforcement learning; Hand
DOI
10.1109/LRA.2022.3149580
CLC classification
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
Soft robots offer a host of benefits over traditional rigid robots, including inherent compliance that lets them passively adapt to variable environments and operate safely around humans and fragile objects. However, that same compliance makes it hard to use model-based methods in planning tasks requiring high precision or complex actuation sequences. Reinforcement learning (RL) can potentially find effective control policies, but training RL using physical soft robots is often infeasible, and training using simulations has had a high barrier to adoption. To accelerate research in control and RL for soft robotic systems, we introduce SoMoGym (Soft Motion Gym), a software toolkit that facilitates training and evaluating controllers for continuum robots. SoMoGym provides a set of benchmark tasks in which soft robots interact with various objects and environments. It allows evaluation of performance on these tasks for controllers of interest, and enables the use of RL to generate new controllers. Custom environments and robots can likewise be added easily. We provide and evaluate baseline RL policies for each of the benchmark tasks. These results show that SoMoGym enables the use of RL for continuum robots, a class of robots not covered by existing benchmarks, giving them the capability to autonomously solve tasks that were previously unattainable.
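As a rough illustration of the workflow the abstract describes, the sketch below trains and evaluates a policy on a SoMoGym-style benchmark task through the standard Gym interface. The environment ID "SomoGym-PlanarReaching-v0" and the use of PPO from stable-baselines3 are illustrative assumptions rather than the toolkit's documented API; the SoMoGym repository defines the actual task names, registration, and baseline configurations.

    import gym
    from stable_baselines3 import PPO  # assumption: any Gym-compatible RL library could be used

    # Hypothetical environment ID; SoMoGym's actual benchmark task names may differ.
    env = gym.make("SomoGym-PlanarReaching-v0")

    # Train a baseline RL policy on the benchmark task.
    model = PPO("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=1_000_000)

    # Evaluate the learned controller for one episode on the same task.
    obs = env.reset()
    done, episode_reward = False, 0.0
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, done, info = env.step(action)
        episode_reward += reward
    print(f"Episode reward: {episode_reward:.2f}")

The same evaluation loop, with model.predict swapped for any controller of interest, would serve to benchmark non-learned controllers on the same tasks.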
Pages: 4071-4078 (8 pages)
Related papers (40 total)
  • [21] Behavior learning and evolution of collective autonomous mobile robots based on reinforcement learning and distributed genetic algorithms
    Jun, HB
    Sim, KB
    RO-MAN '97 SENDAI: 6TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, 1997, : 248 - 253
  • [22] Obstacle-Aware Navigation of Soft Growing Robots via Deep Reinforcement Learning
    El-Hussieny, Haitham
    Hameed, Ibrahim A.
    IEEE ACCESS, 2024, 12 : 38192 - 38201
  • [23] Electromechanical Platform with Removable Overlay for Exploring, Tuning and Evaluating Reinforcement Learning Algorithms
    Tan, Thye Lye Kelvin
    2021 INTERNATIONAL SYMPOSIUM ON COMPUTER SCIENCE AND INTELLIGENT CONTROLS (ISCSIC 2021), 2021, : 102 - 108
  • [24] Tailor-Made Reinforcement Learning Approach With Advanced Noise Optimization for Soft Continuum Robots
    Jayan, Jino
    Lal Priya, P.S.
    Hari Kumar, R.
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (11) : 5509 - 5518
  • [25] Continual Policy Distillation of Reinforcement Learning-based Controllers for Soft Robotic In-Hand Manipulation
    Li, Lanpei
    Donato, Enrico
    Lomonaco, Vincenzo
    Falotico, Egidio
    2024 IEEE 7TH INTERNATIONAL CONFERENCE ON SOFT ROBOTICS, ROBOSOFT, 2024, : 1026 - 1033
  • [26] Beyond Expected Return: Accounting for Policy Reproducibility When Evaluating Reinforcement Learning Algorithms
    Flageat, Manon
    Lim, Bryan
    Cully, Antoine
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 11, 2024, : 12024 - 12032
  • [27] Developing Train Station Parking Algorithms: New Frameworks Based on Fuzzy Reinforcement Learning
    Li, Wei
    Xian, Kai
    Yin, Jiateng
    Chen, Dewang
    JOURNAL OF ADVANCED TRANSPORTATION, 2019, 2019
  • [29] Development of a digital twin environment for smart collision avoidance algorithms for mobile robots using reinforcement learning
    Matsumoto, Natsumi
    Kobayashi, Kazuyuki
    Ohkubo, Tomoyuki
    Tian, Kaiqiao
    Sebi, Nashwan J.
    Cheok, Ka C.
    Cai, Changqing
    2023 62ND ANNUAL CONFERENCE OF THE SOCIETY OF INSTRUMENT AND CONTROL ENGINEERS, SICE, 2023, : 1376 - 1381
  • [30] Innovative energy solutions: Evaluating reinforcement learning algorithms for battery storage optimization in residential settings
    Dou, Zhenlan
    Zhang, Chunyan
    Li, Junqiang
    Li, Dezhi
    Wang, Miao
    Sun, Lue
    Wang, Yong
    PROCESS SAFETY AND ENVIRONMENTAL PROTECTION, 2024, 191 : 2203 - 2221