Optimal Control of Probability on a Target Set for Continuous-Time Markov Chains

Cited: 1
Authors
Ma, Chenglin [1 ]
Zhao, Huaizhong [2 ,3 ]
Affiliations
[1] Shandong Univ, Sch Math, Jinan 250100, Peoples R China
[2] Univ Durham, Dept Math Sci, Durham DH1 3LE, England
[3] Shandong Univ, Res Ctr Math & Interdisciplinary Sci, Qingdao 266237, Peoples R China
Keywords
Markov processes; Optimal control; Dynamic programming; Process control; Games; Aerospace electronics; Safety; Controlled Markov chains; dynamic programming principle (DPP); Hamilton-Jacobi-Bellman (HJB) equation; optimal controls; risk probability criteria; DECISION-PROCESSES; RISK PROBABILITY; CRITERION
DOI
10.1109/TAC.2023.3278789
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline Classification Code
0812
Abstract
In this article, a stochastic optimal control problem is considered for a continuous-time Markov chain taking values in a denumerable state space over a fixed finite horizon. The optimality criterion is the probability that the process remains in a target set at all times up to and including the terminal time. The optimal value, viewed as a set function of the target set, is a superadditive capacity. Under mild assumptions on the controlled Markov process, we establish the dynamic programming principle, based on which we prove that the value function is a classical solution of the Hamilton-Jacobi-Bellman (HJB) equation on a discrete lattice space. We then prove that, under a compactness assumption on the control domain, an optimal deterministic Markov control exists. We further prove that the value function is the unique solution of the HJB equation. We also consider the case in which the process starts outside the target set and establish the corresponding results. Finally, we apply our results to two examples.
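To make the criterion concrete, the following display is a minimal sketch, in illustrative notation not drawn from the article itself, of the kind of value function and HJB equation the abstract describes. Assume a controlled chain with transition rates q(j | i, a), action set A, target set B, and finite horizon T; all symbols here are hypothetical.

\[
V(t,i) = \sup_{\pi} \mathbb{P}^{\pi}_{t,i}\big( X_s \in B \ \text{for all } s \in [t,T] \big), \qquad i \in B,
\]
\[
\frac{\partial V}{\partial t}(t,i) + \sup_{a \in A} \sum_{j \in B} q(j \mid i, a)\, V(t,j) = 0, \qquad V(T,i) = 1 \ \text{for } i \in B,
\]

with the convention V(t, j) = 0 for j outside B, so that leaving the target set is absorbing for this criterion; the sum over j in B, which includes the negative diagonal rate q(i | i, a), then plays the role of the sub-generator of the chain killed on exiting B.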
Pages: 1202-1209
Number of pages: 8