Robotic Control of the Deformation of Soft Linear Objects Using Deep Reinforcement Learning

Cited by: 6
Authors
Zakaria, Melodie Hani Daniel [1 ]
Aranda, Miguel [2 ]
Lequievre, Laurent [1 ]
Lengagne, Sebastien [1 ]
Corrales Ramon, Juan Antonio [3 ]
Mezouar, Youcef [1 ]
Affiliations
[1] Univ Clermont Auvergne, CNRS, Clermont Auvergne INP, Inst Pascal, Clermont Ferrand, France
[2] Univ Zaragoza, Inst Invest Ingn Aragon, Zaragoza, Spain
[3] Univ Santiago de Compostela, Ctr Singular Invest Tecnoloxias Intelixentes CiTI, Santiago De Compostela, Spain
Keywords
MANIPULATION
DOI
10.1109/CASE49997.2022.9926667
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
This paper proposes a new control framework for manipulating soft objects. A Deep Reinforcement Learning (DRL) approach is used to make the shape of a deformable object reach a set of desired points by controlling a robotic arm that manipulates it. Our framework is more easily generalizable than existing ones: it can work directly with different initial and desired final shapes without the need for relearning. We achieve this by using learning parallelization, i.e., executing multiple agents in parallel on various environment instances. We focus our study on deformable linear objects. These objects are of interest in industrial and agricultural domains, yet their manipulation with robots, especially in 3D workspaces, remains challenging. We simulate the entire environment, i.e., the soft object and the robot, for both training and testing using PyBullet and OpenAI Gym. We use a combination of state-of-the-art DRL techniques, the main ingredient being a training approach for the learning agent (i.e., the robot) based on Deep Deterministic Policy Gradient (DDPG). Our simulation results support the usefulness and enhanced generality of the proposed approach.
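For illustration only, the sketch below shows the core DDPG update (actor-critic networks, target networks, Polyak averaging) that a training framework of the kind described in the abstract typically relies on. The observation/action dimensions, network sizes, and the ddpg_update helper are hypothetical placeholders, not taken from the paper; in the authors' setup the observation would come from the simulated deformable object in PyBullet, the action would drive the robot arm, and multiple parallel environment instances would feed the replay buffer.

```python
# Minimal DDPG-style update in PyTorch (illustrative sketch, not the authors' code).
# Assumption: the observation is a flattened vector of deformable-object points plus
# a goal, and the action is a small end-effector displacement; dimensions are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, ACT_DIM = 32, 3   # hypothetical: mesh points + goal, Cartesian displacement

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, ACT_DIM))
    def forward(self, obs):
        return torch.tanh(self.net(obs))        # bounded displacement command

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005

def ddpg_update(batch):
    """One DDPG update on a replay batch (obs, act, rew, next_obs, done)."""
    obs, act, rew, next_obs, done = batch
    with torch.no_grad():
        # Bootstrapped target Q-value using the target actor and critic
        target_q = rew + GAMMA * (1 - done) * critic_tgt(next_obs, actor_tgt(next_obs))
    critic_loss = F.mse_loss(critic(obs, act), target_q)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Deterministic policy gradient: push actions toward higher Q-values
    actor_loss = -critic(obs, actor(obs)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    # Polyak averaging of target networks
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)

# Example call with a random replay batch (batch size 64), just to show the shapes:
B = 64
batch = (torch.randn(B, OBS_DIM), torch.rand(B, ACT_DIM) * 2 - 1,
         torch.randn(B, 1), torch.randn(B, OBS_DIM), torch.zeros(B, 1))
ddpg_update(batch)
```

In a parallelized setup such as the one the abstract describes, several simulated environments would run concurrently and their transitions would be pooled into one replay buffer before calls to an update step like this one; the exact orchestration is not specified here.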
Pages: 1516 - 1522
Number of pages: 7
Related papers
50 records in total
  • [41] Hardware-in-the-Loop Soft Robotic Testing Framework Using an Actor-Critic Deep Reinforcement Learning Algorithm
    Marquez, Jesus
    Sullivan, Charles
    Price, Ryan M.
    Roberts, Robert C.
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (09) : 6076 - 6082
  • [42] Quadrotor motion control using deep reinforcement learning
    Jiang, Zifei
    Lynch, Alan F.
    [J]. JOURNAL OF UNMANNED VEHICLE SYSTEMS, 2021, 9 (04) : 234 - 251
  • [43] CNC machine control using deep reinforcement learning
    Kalandyk, Dawid
    Kwiatkowski, Bogdan
    Mazur, Damian
    [J]. BULLETIN OF THE POLISH ACADEMY OF SCIENCES-TECHNICAL SCIENCES, 2024, 72 (03)
  • [44] Dynamic metasurface control using Deep Reinforcement Learning
    Zhao, Ying
    Li, Liang
    Lanteri, Stephane
    Viquerat, Jonathan
    [J]. MATHEMATICS AND COMPUTERS IN SIMULATION, 2022, 197 : 377 - 395
  • [45] Adaptive Actuation of Magnetic Soft Robots Using Deep Reinforcement Learning
    Yao, Jianpeng
    Cao, Quanliang
    Ju, Yuwei
    Sun, Yuxuan
    Liu, Ruiqi
    Han, Xiaotao
    Li, Liang
    [J]. ADVANCED INTELLIGENT SYSTEMS, 2023, 5 (02)
  • [46] Shape Control of Deformable Linear Objects with Offline and Online Learning of Local Linear Deformation Models
    Yu, Mingrui
    Zhong, Hanzhong
    Li, Xiang
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022,
  • [47] A novel robotic grasping method for moving objects based on multi-agent deep reinforcement learning
    Huang, Yu
    Liu, Daxin
    Liu, Zhenyu
    Wang, Ke
    Wang, Qide
    Tan, Jianrong
    [J]. ROBOTICS AND COMPUTER-INTEGRATED MANUFACTURING, 2024, 86
  • [48] Robotic assembly control reconfiguration based on transfer reinforcement learning for objects with different geometric features
    Gai, Yuhang
    Wang, Bing
    Zhang, Jiwen
    Wu, Dan
    Chen, Ken
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 129
  • [49] The Use of Reinforcement Learning in the Task of Moving Objects with the Robotic Arm
    Aitygulov, Ermek E.
    [J]. ARTIFICIAL INTELLIGENCE, 2019, 11866 : 119 - 126
  • [50] Deep Reinforcement Learning-Based Accurate Control of Planetary Soft Landing
    Xu, Xibao
    Chen, Yushen
    Bai, Chengchao
    [J]. SENSORS, 2021, 21 (23)