State Identification Via Symbolic Time Series Analysis for Reinforcement Learning Control

Times Cited: 0
Authors
Bhattacharya, Chandrachur [1 ]
Ray, Asok [2 ,3 ]
Affiliations
[1] Penn State Univ, Dept Mech Engn & Elect Engn, University Pk, PA 16802 USA
[2] Penn State Univ, Dept Mech Engn, University Pk, PA 16802 USA
[3] Penn State Univ, Dept Math, University Pk, PA 16802 USA
Source
JOURNAL OF DYNAMIC SYSTEMS MEASUREMENT AND CONTROL-TRANSACTIONS OF THE ASME, 2024, Vol. 146, No. 05
Funding
U.S. National Science Foundation
Keywords
regime identification; reinforcement learning control; probabilistic finite state automata; Lorenz system; AUTOMATA;
DOI
10.1115/1.4065501
Chinese Library Classification (CLC) Number
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
This technical brief uses symbolic time-series analysis (STSA) to identify discrete states from the nonlinear time response of a chaotic dynamical system for model-free reinforcement learning (RL) control. Along this line, a projection-based method is adopted to construct probabilistic finite state automata (PFSA) that identify the current state (i.e., operational regime) of the Lorenz system, and a simple Q-map-based (and model-free) RL control strategy is formulated to drive the system from the identified current state to the target state. The synergistic combination of PFSA-based state identification and RL control is demonstrated by simulation of a numerical model of the Lorenz system, which yields very satisfactory real-time performance in reaching the target states from the current states.
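The abstract outlines a two-stage pipeline: (i) symbolize the measured time series with a fixed partition and estimate a PFSA transition matrix to identify the current operational regime, and (ii) apply model-free tabular Q-learning over the identified discrete states. The sketch below is a minimal Python illustration of that pipeline, not the authors' implementation; the Euler integrator, the quartile partition of the x-coordinate, the Frobenius-distance regime matching in identify_regime, and the Q-update details are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (not the authors' code): symbolize a Lorenz time series,
# estimate a PFSA-style transition matrix, and run tabular Q-learning over
# the identified discrete states. Partitioning, regime matching, and the
# Q-update parameters below are illustrative assumptions.
import numpy as np

def lorenz_trajectory(x0, n_steps, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz equations with a simple forward-Euler scheme."""
    traj = np.empty((n_steps, 3))
    x = np.asarray(x0, dtype=float)
    for k in range(n_steps):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        traj[k] = x
    return traj

def symbolize(series, bin_edges):
    """Map a scalar time series to a symbol sequence via a fixed partition."""
    return np.digitize(series, bin_edges)

def pfsa_transition_matrix(symbols, alphabet_size):
    """Estimate the row-stochastic symbol transition matrix of a depth-1 PFSA."""
    counts = np.ones((alphabet_size, alphabet_size))  # Laplace smoothing
    for s, s_next in zip(symbols[:-1], symbols[1:]):
        counts[s, s_next] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def identify_regime(window_symbols, reference_matrices, alphabet_size):
    """Pick the reference regime whose transition matrix is nearest (Frobenius norm)."""
    P = pfsa_transition_matrix(window_symbols, alphabet_size)
    dists = [np.linalg.norm(P - R) for R in reference_matrices]
    return int(np.argmin(dists))

def q_learning_step(Q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One model-free tabular Q-update; Q is an (n_states x n_actions) array."""
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q

if __name__ == "__main__":
    traj = lorenz_trajectory([1.0, 1.0, 1.0], n_steps=20000)
    edges = np.quantile(traj[:, 0], [0.25, 0.5, 0.75])  # 4-symbol partition of x
    symbols = symbolize(traj[:, 0], edges)
    P = pfsa_transition_matrix(symbols, alphabet_size=4)
    print("Estimated PFSA transition matrix:\n", np.round(P, 3))
```

In this sketch, regime identification amounts to comparing the transition matrix estimated from a short observation window against reference matrices built offline for each known regime; the Q-table is then indexed by the identified regime, which is how the state-identification and RL-control stages are coupled.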
Pages: 6
Related Articles
50 records in total
  • [1] Piccardi, C. On the control of chaotic systems via symbolic time series analysis. CHAOS, 2004, 14(4): 1026-1034.
  • [2] Rajagopalan, Venkatesh; Ray, Asok; Samsi, Rohan; Mayer, Jeffrey. Pattern identification in dynamical systems via symbolic time series analysis. PATTERN RECOGNITION, 2007, 40(11): 2897-2907.
  • [3] Gupta, Shalabh; Khatkhate, Amol; Ray, Asok; Keller, Eric. Identification of statistical patterns in complex systems via symbolic time series analysis. ISA TRANSACTIONS, 2006, 45(4): 477-490.
  • [4] Kubalik, Jiri; Alibekov, Eduard; Babuska, Robert. Optimal Control via Reinforcement Learning with Symbolic Policy Approximation. IFAC PAPERSONLINE, 2017, 50(1): 4162-4167.
  • [5] Bhattacharya, Chandrachur; Ray, Asok. Transfer Learning for Detection of Combustion Instability Via Symbolic Time-Series Analysis. JOURNAL OF DYNAMIC SYSTEMS MEASUREMENT AND CONTROL-TRANSACTIONS OF THE ASME, 2021, 143(10).
  • [6] Dabelow, Lennart; Ueda, Masahito. Symbolic equation solving via reinforcement learning. NEUROCOMPUTING, 2025, 613.
  • [7] Alla, Alessandro; Pacifico, Agnese; Palladino, Michele; Pesare, Andrea. Online identification and control of PDEs via reinforcement learning methods. ADVANCES IN COMPUTATIONAL MATHEMATICS, 2024, 50(4).
  • [8] Rajagopalan, Venkatesh; Ray, Asok. Symbolic time series analysis via wavelet-based partitioning. SIGNAL PROCESSING, 2006, 86(11): 3309-3320.
  • [9] Gupta, S.; Ray, A.; Keller, E. Online detection of fatigue failure via symbolic time series analysis. ACC: PROCEEDINGS OF THE 2005 AMERICAN CONTROL CONFERENCE, VOLS 1-7, 2005: 3309-3314.
  • [10] De Polsi, G.; Cabeza, C.; Marti, A. C.; Masoller, C. Characterizing the dynamics of coupled pendulums via symbolic time series analysis. EUROPEAN PHYSICAL JOURNAL-SPECIAL TOPICS, 2013, 222(2): 501-510.