Simulated and Real Robotic Reach, Grasp, and Pick-and-Place Using Combined Reinforcement Learning and Traditional Controls

Cited by: 10
Authors
Lobbezoo, Andrew [1 ]
Kwon, Hyock-Ju [1 ]
Affiliations
[1] Univ Waterloo, Dept Mech & Mechatron Engn, AI Mfg Lab, Waterloo, ON N2L 3G1, Canada
Keywords
reinforcement learning; proximal policy optimization; soft actor-critic; simulation environment; robot operating system; robotic control; Franka Panda robot; pick-and-place; real-world robotics
DOI
10.3390/robotics12010012
Chinese Library Classification
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
The majority of robots in factories today are operated with conventional control strategies that require individual programming on a task-by-task basis, with no margin for error. As an alternative to the rudimentary operation planning and task-programming techniques, machine learning has shown significant promise for higher-level task planning, with the development of reinforcement learning (RL)-based control strategies. This paper reviews the implementation of combined traditional and RL control for simulated and real environments to validate the RL approach for standard industrial tasks such as reach, grasp, and pick-and-place. The goal of this research is to bring intelligence to robotic control so that robotic operations can be completed without precisely defining the environment, constraints, and the action plan. The results from this approach provide optimistic preliminary data on the application of RL to real-world robotics.
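The core idea in the abstract — learning a control policy from reward feedback instead of hand-programming each motion — can be illustrated with a toy example. The sketch below is not from the paper (which trains PPO and soft actor-critic policies on a simulated and real Franka Panda); it is a minimal tabular Q-learning agent on a hypothetical discrete 1-D "reach" task, with all names and parameters invented for illustration.

```python
import random

# Toy 1-D "reach" task: a gripper occupies one of N discrete positions and
# must reach a target cell. Nothing here is specified by the paper; it is a
# minimal sketch of the RL loop: act, observe reward, update value estimates.

N = 10               # number of discrete gripper positions
TARGET = 7           # goal position the gripper must reach
ACTIONS = (-1, +1)   # move left / move right

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: no environment model or action plan is given."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randrange(N)
        for _ in range(50):                       # step limit per episode
            # epsilon-greedy exploration
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), N - 1)        # clamp to the workspace
            r = 1.0 if s2 == TARGET else -0.1     # sparse reach reward
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
            if s == TARGET:
                break
    return q

def greedy_path(q, start):
    """Roll out the learned (greedy) policy from `start` toward the target."""
    path, s = [start], start
    for _ in range(N):
        a = max(ACTIONS, key=lambda x: q[(s, x)])
        s = min(max(s + a, 0), N - 1)
        path.append(s)
        if s == TARGET:
            break
    return path

q = train()
print(greedy_path(q, 2))  # a monotone path ending at the target position
```

The paper's setting replaces the table with neural-network policies (PPO, SAC) and the 1-D line with a 7-DOF arm, but the structure — reward-driven trial and error in simulation before real-world deployment — is the same.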
Pages: 19
Related Papers
33 records in total
  • [1] Deep Reinforcement Learning Applied to a Robotic Pick-and-Place Application
    Gomes, Natanael Magno
    Martins, Felipe N.
    Lima, Jose
    Wortche, Heinrich
    OPTIMIZATION, LEARNING ALGORITHMS AND APPLICATIONS, OL2A 2021, 2021, 1488 : 251 - 265
  • [2] Sim-to-Real Deep Reinforcement Learning with Manipulators for Pick-and-Place
    Liu, Wenxing
    Niu, Hanlin
    Skilton, Robert
    Carrasco, Joaquin
    TOWARDS AUTONOMOUS ROBOTIC SYSTEMS, TAROS 2023, 2023, 14136 : 240 - 252
  • [3] Prehensile and Non-Prehensile Robotic Pick-and-Place of Objects in Clutter Using Deep Reinforcement Learning
    Imtiaz, Muhammad Babar
    Qiao, Yuansong
    Lee, Brian
    SENSORS, 2023, 23 (03)
  • [4] Reinforcement Learning for Collaborative Robots Pick-and-Place Applications: A Case Study
    Gomes, Natanael Magno
    Martins, Felipe Nascimento
    Lima, Jose
    Wortche, Heinrich
    AUTOMATION, 2022, 3 (01): : 223 - 241
  • [5] PolyDexFrame: Deep Reinforcement Learning-Based Pick-and-Place of Objects in Clutter
    Imtiaz, Muhammad Babar
    Qiao, Yuansong
    Lee, Brian
    MACHINES, 2024, 12 (08)
  • [6] Process sequencing for a pick-and-place robot in a real-life flexible robotic cell
    Nejad, Mazyar Ghadiri
    Shavarani, Seyed Mahdi
    Guden, Huseyin
    Barenji, Reza Vatankhah
    INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2019, 103 (9-12): : 3613 - 3627
  • [7] A simulated annealing heuristic for robotics assembly using the dynamic pick-and-place model
    Su, CT
    Fu, HP
    PRODUCTION PLANNING & CONTROL, 1998, 9 (08) : 795 - 802
  • [8] Implementing Robotic Pick and Place with Non-visual Sensing Using Reinforcement Learning
    Imtiaz, Muhammad Babar
    Qiao, Yuansong
    Lee, Brian
    2022 6TH INTERNATIONAL CONFERENCE ON ROBOTICS, CONTROL AND AUTOMATION (ICRCA 2022), 2022, : 23 - 28
  • [9] Optimizing a Manufacturing Pick-and-Place Operation on a Robotic Arm Using a Digital Twin
    Perry, LaShaundra
    Guerra-Zubiaga, David A.
    Richards, Gershom
    Abidoye, Cecil
    Hantouli, Fadi
    PROCEEDINGS OF ASME 2023 INTERNATIONAL MECHANICAL ENGINEERING CONGRESS AND EXPOSITION, IMECE2023, VOL 3, 2023