Reinforcement learning methods based on GPU accelerated industrial control hardware

Cited by: 5
Authors
Schmidt, Alexander [1 ]
Schellroth, Florian [1 ]
Fischer, Marc [1 ]
Allimant, Lukas [1 ]
Riedel, Oliver [1 ]
Affiliations
[1] Univ Stuttgart, Inst Control Engn Machine Tools & Mfg Units, Stuttgart, Germany
Source
NEURAL COMPUTING & APPLICATIONS | 2021, Vol. 33, Issue 18
Keywords
Reinforcement learning; PLC; Real-time; Industrial control; Manufacturing; GPU;
DOI
10.1007/s00521-021-05848-4
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Reinforcement learning is a promising approach for manufacturing processes. Process knowledge can be gained automatically, and autonomous tuning of control is possible. However, the use of reinforcement learning in a production environment imposes specific requirements that must be met for a successful application. This article defines those requirements and evaluates three reinforcement learning methods to explore their applicability. The results show that convolutional neural networks are computationally heavy and violate the real-time execution requirements. A new architecture is presented and validated that allows using GPU-based hardware acceleration while meeting the real-time execution requirements.
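The abstract does not describe how the proposed architecture reconciles GPU inference latency with the cyclic real-time task, so the following Python sketch is purely an illustrative assumption, not the authors' implementation: a non-real-time worker thread runs a (simulated) heavy policy forward pass, while the cyclic control task only ever reads the most recently computed action and therefore never blocks past its deadline. The cycle time, inference latency, and all names below are hypothetical.

import threading
import time

# Hypothetical timing figures for illustration only: the control cycle is
# shorter than the neural-network inference latency, so inference must not
# run inside the real-time task.
CYCLE_TIME_S = 0.004       # assumed 4 ms real-time control cycle
INFERENCE_TIME_S = 0.020   # assumed 20 ms GPU inference latency

latest_action = 0.0        # most recent action produced by the policy
latest_state = 1.0         # most recent state sampled by the control task
lock = threading.Lock()
stop = threading.Event()

def policy_inference(state):
    """Stand-in for a CNN forward pass on the GPU, simulated by a delay."""
    time.sleep(INFERENCE_TIME_S)
    return -0.5 * state    # dummy proportional policy

def inference_worker():
    """Non-real-time thread: keeps inferring on the newest available state."""
    global latest_action
    while not stop.is_set():
        with lock:
            state = latest_state
        action = policy_inference(state)   # slow call happens outside the lock
        with lock:
            latest_action = action

def control_cycle(n_cycles=50):
    """Cyclic control task: never waits for inference, so every deadline is met."""
    global latest_state
    state = 1.0
    for _ in range(n_cycles):
        t0 = time.perf_counter()
        with lock:
            latest_state = state
            action = latest_action         # possibly a few cycles old
        state += CYCLE_TIME_S * action     # toy plant update
        time.sleep(max(0.0, CYCLE_TIME_S - (time.perf_counter() - t0)))
    print("final state after", n_cycles, "cycles:", round(state, 4))

worker = threading.Thread(target=inference_worker, daemon=True)
worker.start()
control_cycle()
stop.set()
worker.join(timeout=1.0)

In such a decoupled setup the control task may act on an action that is a few cycles old; accepting that staleness is the usual trade-off when heavy neural-network inference is moved off the real-time execution path onto GPU hardware.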
Pages: 12191-12207
Number of pages: 17
Related Papers
50 records in total
  • [1] Reinforcement learning methods based on GPU accelerated industrial control hardware
    Alexander Schmidt
    Florian Schellroth
    Marc Fischer
    Lukas Allimant
    Oliver Riedel
    [J]. Neural Computing and Applications, 2021, 33 : 12191 - 12207
  • [2] Towards Hardware Accelerated Reinforcement Learning for Application-Specific Robotic Control
    Shao, Shengjia
    Tsai, Jason
    Mysior, Michal
    Luk, Wayne
    Chau, Thomas
    Warren, Alexander
    Jeppesen, Ben
    [J]. 2018 IEEE 29TH INTERNATIONAL CONFERENCE ON APPLICATION-SPECIFIC SYSTEMS, ARCHITECTURES AND PROCESSORS (ASAP), 2018, : 135 - 142
  • [3] GUNREAL: GPU-accelerated UNsupervised REinforcement and Auxiliary Learning
    Coppens, Youri
    Shirahata, Koichi
    Fukagai, Takuya
    Tomita, Yasumoto
    Ike, Atsushi
    [J]. 2017 FIFTH INTERNATIONAL SYMPOSIUM ON COMPUTING AND NETWORKING (CANDAR), 2017, : 330 - 336
  • [4] Impulsive Accelerated Reinforcement Learning for H∞ Control
    Wu, Yan
    Luo, Shixian
    Jiang, Yan
    [J]. NEURAL INFORMATION PROCESSING, ICONIP 2023, PT II, 2024, 14448 : 191 - 203
  • [5] Pgx: Hardware-Accelerated Parallel Game Simulators for Reinforcement Learning
    Koyamada, Sotetsu
    Okano, Shinri
    Nishimori, Soichiro
    Murata, Yu
    Habara, Keigo
    Kita, Haruka
    Ishii, Shin
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [6] Intrusion Detection in Industrial Control Systems Based on Deep Reinforcement Learning
    Sangoleye, Fisayo
    Johnson, Jay
    Tsiropoulou, Eirini Eleni
    [J]. IEEE Access, 2024, 12 : 151444 - 151459
  • [7] Assessment of reinforcement learning applications for industrial control based on complexity measures
    Grothoff, Julian
    Camargo Torres, Nicolas
    Kleinert, Tobias
    [J]. AT-AUTOMATISIERUNGSTECHNIK, 2022, 70 (01) : 53 - 66
  • [8] Accelerated Reinforcement Learning for Temporal Logic Control Objectives
    Kantaros, Yiannis
    [J]. 2022 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2022, : 5077 - 5082
  • [9] Reinforcement Learning-Based Impedance Learning for Robot Admittance Control in Industrial Assembly
    Feng, Xiaoxin
    Shi, Tian
    Li, Weibing
    Lu, Peng
    Pan, Yongping
    [J]. 2022 INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2022), 2022, : 1092 - 1097
  • [10] Reinforcement Learning for Online Industrial Process Control
    Govindhasamy, James J.
    McLoone, Sean F.
    Irwin, George W.
    French, John J.
    Doyle, Richard P.
    [J]. JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2005, 9 (01) : 23 - 30