Convergence rates of Markov chain approximation methods for controlled diffusions with stopping

Cited by: 2
Authors
Song, Qingshuo [1 ]
Yin, Gang George [2 ]
Affiliations
[1] City Univ Hong Kong, Dept Math, Kowloon Tong, Hong Kong, Peoples R China
[2] Wayne State Univ, Dept Math, Detroit, MI 48202 USA
Funding
National Science Foundation (USA);
Keywords
Controlled diffusion; dynamic programming equation; Markov chain approximation; rate of convergence; FINITE-DIFFERENCE APPROXIMATIONS; REGIME-SWITCHING DIFFUSIONS; NUMERICAL-METHODS; BELLMAN EQUATIONS;
DOI
10.1007/s11424-010-0148-5
Chinese Library Classification
O1 [Mathematics];
Discipline Classification Code
0701; 070101;
Abstract
This work is concerned with rates of convergence of numerical methods based on Markov chain approximation for controlled diffusions with stopping (at the first exit time from a bounded region). Instead of working with the associated finite difference schemes for the Hamilton-Jacobi-Bellman (HJB) equations, a purely probabilistic approach is used. An added difficulty arises from the boundary condition, which requires continuity of the first exit time with respect to the discretization parameter. In proving convergence of the Markov chain approximation algorithm, a tangency problem may arise; a common approach imposes additional conditions to avoid it. Here, by modifying the value function, it is shown that the tangency problem does not arise in the sense of convergence in probability and in L^1. In addition, controlled diffusions with a discount factor are also treated.
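As a rough illustration of the Markov chain approximation method discussed in the abstract (a minimal sketch, not the authors' scheme or code), the following Python fragment applies the standard Kushner-Dupuis locally consistent chain to a one-dimensional controlled diffusion stopped at the first exit from (0, 1). The drift b, diffusion coefficient sigma, running cost k, exit cost g, grid spacing, and finite control set are illustrative placeholder assumptions.

```python
# Sketch: Markov chain approximation (Kushner-Dupuis type) for a 1-D
# controlled diffusion dX = b(X,u) dt + sigma(X) dW stopped at the first
# exit from (0, 1), minimizing E[ integral_0^tau k(X,u) dt + g(X_tau) ].
# All model data below are placeholders, not taken from the paper.
import numpy as np

h = 0.05                                    # grid spacing (assumption)
xs = np.arange(0.0, 1.0 + h / 2, h)         # grid on [0, 1]
controls = np.linspace(-1.0, 1.0, 11)       # finite control set (assumption)

def b(x, u):    return u                    # drift (placeholder)
def sigma(x):   return 0.5                  # diffusion coefficient (placeholder)
def k(x, u):    return 1.0 + u**2           # running cost (placeholder)
def g(x):       return 0.0                  # exit cost on the boundary (placeholder)

V = np.array([g(x) for x in xs])            # boundary data; interior starts at 0

# Value iteration on the chain's dynamic programming equation:
#   V(x) = min_u [ k(x,u) dt(x,u) + p_up(x,u) V(x+h) + p_dn(x,u) V(x-h) ],
# with V = g at the exit states, which are absorbing.
for _ in range(5000):
    V_new = V.copy()
    for i in range(1, len(xs) - 1):         # interior grid points only
        x, best = xs[i], np.inf
        for u in controls:
            s2 = sigma(x) ** 2
            denom = s2 + h * abs(b(x, u))
            dt = h**2 / denom               # local interpolation interval
            p_up = (s2 / 2 + h * max(b(x, u), 0.0)) / denom
            p_dn = (s2 / 2 + h * max(-b(x, u), 0.0)) / denom
            best = min(best, k(x, u) * dt + p_up * V[i + 1] + p_dn * V[i - 1])
        V_new[i] = best
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
```

The transition probabilities and interpolation interval above satisfy the usual local consistency conditions (matching drift and diffusion to first order in h); the paper's contribution concerns the rate at which such approximations converge, which this sketch does not address.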
Pages: 600-621
Number of pages: 22