Simulated and Real Robotic Reach, Grasp, and Pick-and-Place Using Combined Reinforcement Learning and Traditional Controls

Cited by: 10
Authors
Lobbezoo, Andrew [1 ]
Kwon, Hyock-Ju [1 ]
Affiliations
[1] Univ Waterloo, Dept Mech & Mechatron Engn, AI Mfg Lab, Waterloo, ON N2L 3G1, Canada
Keywords
reinforcement learning; proximal policy optimization; soft actor-critic; simulation environment; robot operating system; robotic control; Franka Panda robot; pick-and-place; real-world robotics
DOI
10.3390/robotics12010012
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject classification codes
080202; 1405
Abstract
The majority of robots in factories today are operated with conventional control strategies that require individual programming on a task-by-task basis, with no margin for error. As an alternative to these rudimentary operation-planning and task-programming techniques, machine learning has shown significant promise for higher-level task planning through the development of reinforcement learning (RL)-based control strategies. This paper reviews the implementation of combined traditional and RL control in simulated and real environments to validate the RL approach for standard industrial tasks such as reach, grasp, and pick-and-place. The goal of this research is to bring intelligence to robotic control so that robotic operations can be completed without precisely defining the environment, the constraints, and the action plan. The results from this approach provide optimistic preliminary data on the application of RL to real-world robotics.
Pages: 19