Adaptive Weight Tuning of EWMA Controller via Model-Free Deep Reinforcement Learning

Cited by: 5
Authors
Ma, Zhu [1,2]
Pan, Tianhong [2]
Affiliations
[1] Anhui Univ, Sch Comp Sci & Technol, Hefei 230601, Peoples R China
[2] Anhui Univ, Sch Elect Engn & Automat, Hefei 230601, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Run-to-run control; adaptive control; exponentially weighted moving average; deep reinforcement learning; semiconductor manufacturing process; RUN-TO-RUN CONTROL
DOI
10.1109/TSM.2022.3225480
CLC number
T [Industrial Technology]
Discipline code
08
Abstract
Exponentially weighted moving average (EWMA) controllers have been extensively studied for run-to-run (RtR) control in semiconductor manufacturing processes. However, an EWMA controller with a fixed weight struggles to achieve good performance under unknown stochastic disturbances. To improve EWMA performance via online parameter tuning, an intelligent strategy based on the deep reinforcement learning (DRL) technique is developed in this work. First, the weight-adjustment problem is formulated as a Markov decision process, and a simple state space, action space, and reward function are designed. Then, the classical deep deterministic policy gradient (DDPG) algorithm is utilized to adjust the weight online, and a quantile regression-based DDPG (QR-DDPG) algorithm is further employed to verify the effectiveness of the proposed method. Finally, the developed scheme is implemented on a deep reactive ion etching process. Comparisons demonstrate the superiority of the presented approach in terms of disturbance rejection and target tracking.
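The record contains no code, but as a rough sketch of the control law being tuned, the following minimal Python example implements the standard single-EWMA run-to-run update and marks where a learned policy (e.g., a trained DDPG actor) would supply the weight online. All names here (T, b, lam, choose_weight) are illustrative assumptions rather than the authors' notation, and choose_weight is a hypothetical stub standing in for the DRL agent.

    import numpy as np

    # Minimal sketch of a single-EWMA run-to-run controller, assuming the
    # usual static linear process model y_k = alpha + beta*u_k + d_k with an
    # unknown drifting disturbance d_k. The paper's DRL agent is replaced by
    # the choose_weight() stub below.
    rng = np.random.default_rng(0)

    T = 1.0      # process target
    beta = 1.2   # true process gain (unknown to the controller)
    b = 1.0      # controller's gain estimate (deliberate model mismatch)
    a = 0.0      # EWMA estimate of the offset/disturbance term
    u = 0.0      # recipe (control input) for the current run
    drift = 0.0  # slowly drifting disturbance

    def choose_weight(error):
        # Hypothetical placeholder for the DRL policy (e.g., a DDPG actor)
        # that would map the observed state, such as recent tracking errors,
        # to a weight in (0, 1). A fixed weight is returned here.
        return 0.3

    for k in range(50):
        drift += 0.02                          # deterministic drift component
        y = 0.5 + beta * u + drift + 0.05 * rng.standard_normal()
        lam = choose_weight(y - T)             # weight from the (stubbed) policy
        a = lam * (y - b * u) + (1 - lam) * a  # EWMA disturbance estimate
        u = (T - a) / b                        # recipe for the next run
        print(f"run {k:2d}: y = {y:+.3f}, error = {y - T:+.3f}, lam = {lam:.2f}")

Replacing the fixed return value of choose_weight with a state-dependent policy is the degree of freedom the paper's DRL formulation exploits: under the fixed weight above, the persistent drift leaves a steady-state tracking offset that a larger, adaptively chosen weight would reduce.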
Pages: 91-99
Page count: 9