A Modular Simulation Platform for Training Robots via Deep Reinforcement Learning and Multibody Dynamics

Cited by: 0
Authors
Benatti, Simone [1 ]
Tasora, Alessandro [1 ]
Fusai, Dario [1 ]
Mangoni, Dario [1 ]
Affiliations
[1] Univ Parma, Dept Engn & Architecture, Parco Area Sci 181-A, Parma, Italy
Keywords
Physical Simulation; Multibody Simulation; Reinforcement Learning; Deep Learning; Neural Networks; Robotics; Control;
DOI
10.1145/3365265.3365274
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In this work we focus on the role of multibody simulation in creating reinforcement learning virtual environments for robotic manipulation, presenting a versatile, efficient, and open-source toolchain that builds such environments directly from CAD models. Using the Chrono::SolidWorks plugin, robotic environments can be assembled in the 3D CAD software SolidWorks(R) and then converted into PyChrono models (PyChrono is an open-source Python module for multibody simulation). In addition, we demonstrate how collision detection can be made more efficient by introducing a limited number of contact primitives instead of performing collision detection and evaluation on complex 3D meshes, while still obtaining a policy that avoids unwanted collisions. We tested this approach on a 6-DOF Comau Racer3 robot: the robot, together with a two-finger gripper (Hand-E by Robotiq), was modelled in SolidWorks(R), imported as a PyChrono model, and a neural network (NN) was then trained in simulation to control its motor torques so as to reach a target position. To demonstrate the versatility of the toolchain, we repeated the same procedure to model and train the ABB IRB 120 robotic arm.
Pages: 7-11
Page count: 5
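To make the workflow summarized in the abstract concrete, the following is a minimal, illustrative PyChrono sketch of the import-and-control loop it describes: loading a model exported by the Chrono::SolidWorks add-in, attaching a torque motor to a joint, and stepping the multibody dynamics while a policy supplies torques. The file name 'racer3.py', the body names, and the constant example torque are hypothetical placeholders, the exact helper names (e.g. ImportSolidWorksSystem, ChFunction_Const) may differ between PyChrono versions, and this is not the authors' actual training code.

```python
import pychrono as chrono

# Build an empty Chrono system (NSC = non-smooth contact formulation).
system = chrono.ChSystemNSC()
system.Set_G_acc(chrono.ChVectorD(0, -9.81, 0))

# Load the bodies, joints, and collision shapes exported by the
# Chrono::SolidWorks add-in ('racer3.py' is a hypothetical file name).
exported_items = chrono.ImportSolidWorksSystem('racer3.py')
for item in exported_items:
    system.Add(item)

# Look up two adjacent links by the names they received in the CAD assembly
# (hypothetical names; they depend on the original SolidWorks part tree).
link5 = system.SearchBody('racer3-link5-1')
link6 = system.SearchBody('racer3-link6-1')

# Drive the joint between them with a torque motor, so a learned policy can
# command a torque at every control step.
motor = chrono.ChLinkMotorRotationTorque()
motor.Initialize(link6, link5, chrono.ChFrameD(link6.GetPos()))
system.Add(motor)

# Plain simulation loop: apply a (here constant, normally policy-supplied)
# torque and advance the multibody dynamics.
timestep = 2e-3
for step in range(1000):
    torque = 1.0  # placeholder for the neural-network policy output [Nm]
    motor.SetTorqueFunction(chrono.ChFunction_Const(torque))
    system.DoStepDynamics(timestep)

# End-effector position, e.g. to compute a distance-to-target reward.
print(link6.GetPos())
```

In a full environment along the lines of the abstract, the observation (joint states and target position) and the reward would be read from the simulated state after each step, and the complex collision meshes would be replaced by a few contact primitives on the relevant links.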