Finite horizon continuous-time Markov decision processes with mean and variance criteria

Cited by: 4
Authors
Huang, Yonghui [1 ]
Affiliation
[1] Sun Yat Sen Univ, Sch Math, Guangzhou 510275, Guangdong, Peoples R China
Keywords
Markov decision processes; Continuous time; Finite horizon optimality; HJB equation; Optimal policy; HISTORY-DEPENDENT POLICIES; UNBOUNDED RATES; SEMI-MARKOV; OPTIMALITY; OPTIMIZATION; MINIMIZATION; MODELS;
DOI
10.1007/s10626-018-0273-1
CLC Number
TP [Automation technology, computer technology];
Discipline Classification Code
0812
Abstract
This paper studies mean maximization and variance minimization problems in finite-horizon continuous-time Markov decision processes. The state and action spaces are assumed to be Borel spaces, while reward functions and transition rates are allowed to be unbounded. For the mean problem, we design a method called successive approximation, which enables us to prove the existence of a solution to the Hamilton-Jacobi-Bellman (HJB) equation, and then the existence of a mean-optimal policy, under some growth and compactness-continuity conditions. For the variance problem, using first-jump analysis, we convert the second moment of the finite-horizon reward into the mean of a finite-horizon reward with new reward functions under suitable conditions; on this basis, the associated HJB equation for the variance problem and the existence of variance-optimal policies are established. Value iteration algorithms for computing mean- and variance-optimal policies are proposed.
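The value iteration proposed in the paper operates on the continuous-time HJB equation; as a simple discrete-time illustration of the underlying idea, the sketch below runs finite-horizon value iteration (backward induction) on a hypothetical toy MDP with two states and two actions. All model data (`P`, `r`, `T`) are invented for the example and are not from the paper.

```python
import numpy as np

# Hypothetical toy model: 2 states, 2 actions, horizon T (not from the paper).
# P[a] is the transition matrix under action a; r[a] is the reward vector.
P = {0: np.array([[0.9, 0.1], [0.2, 0.8]]),
     1: np.array([[0.5, 0.5], [0.6, 0.4]])}
r = {0: np.array([1.0, 0.0]),
     1: np.array([0.5, 2.0])}
T = 10           # horizon length
n_states = 2

V = np.zeros(n_states)                    # terminal condition V_T = 0
policy = np.zeros((T, n_states), dtype=int)

# Backward induction: V_t(x) = max_a [ r(x, a) + sum_y P(y | x, a) * V_{t+1}(y) ]
for t in reversed(range(T)):
    Q = np.stack([r[a] + P[a] @ V for a in (0, 1)])  # shape: (actions, states)
    policy[t] = Q.argmax(axis=0)          # mean-optimal action at time t
    V = Q.max(axis=0)                     # value-to-go from time t

print(V)          # mean-optimal value at time 0
print(policy[0])  # optimal actions at time 0
```

In the continuous-time setting of the paper, the maximization over actions is applied to the generator of the process inside the HJB equation rather than to one-step expectations, but the backward-in-time structure of the computation is the same.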
Pages: 539-564
Number of pages: 26