Complexity Bounds for Deterministic Partially Observed Markov Decision Processes

Authors
Cyrille Vessaire [1 ]
Pierre Carpentier [2 ]
Jean-Philippe Chancelier [1 ]
Michel De Lara [1 ]
Alejandro Rodríguez-Martínez [3 ]
Affiliations
[1] CERMICS, École des Ponts ParisTech
[2] UMA, ENSTA Paris, IP Paris
[3] TotalEnergies SE
DOI
10.1007/s10479-024-06282-0
Abstract
Partially Observed Markov Decision Processes (Pomdp) share the structure of Markov Decision Processes (Mdp), with stages, states, actions, probability transitions and rewards, but differ in the notion of solution. In a Pomdp, observation mappings provide partial and/or imperfect knowledge of the state, and a policy maps observations (and not states, as in an Mdp) to actions. Theoretically, a Pomdp can be solved by Dynamic Programming (DP), but with an information state made of probability distributions over the original state; hence DP suffers from the curse of dimensionality, even in the finite case. This is why authors like Littman (Littman, M. L. (1996). Algorithms for Sequential Decision Making. PhD thesis, Brown University) and Bonet (Bonet, B. (2009). Deterministic POMDPs revisited. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI '09 (pp. 59–66). Arlington, Virginia, USA: AUAI Press) have studied the subclass of so-called Deterministic Partially Observed Markov Decision Processes (Det-Pomdp), where transition and observation mappings are deterministic. In this paper, we improve on Littman's complexity bounds. We then introduce and study a more restricted class, Separated Det-Pomdps, and give new complexity bounds for this class.
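To make the curse of dimensionality mentioned in the abstract concrete, below is a minimal sketch (not taken from the paper) of the classical belief-state value recursion on a toy finite Pomdp. The names P, O, R and value, and all numerical data, are hypothetical and purely illustrative. The information state is a probability distribution (belief) over the original states, and the recursion branches over every (action, observation) pair, so the number of beliefs explored grows like (|A||O|)^T with the horizon T.

```python
import numpy as np

# A minimal illustrative sketch, not the paper's algorithm: exact
# finite-horizon value recursion on the belief state of a toy POMDP.
# All model data below is randomly generated for illustration only.

n_states, n_actions, n_obs, horizon = 2, 2, 2, 3

rng = np.random.default_rng(0)
# P[a][s, s'] : probability of moving to s' from s under action a
P = [rng.dirichlet(np.ones(n_states), size=n_states) for _ in range(n_actions)]
# O[a][s', o] : probability of observing o in next state s' after action a
O = [rng.dirichlet(np.ones(n_obs), size=n_states) for _ in range(n_actions)]
# R[a, s] : immediate reward for taking action a in state s
R = rng.uniform(0.0, 1.0, size=(n_actions, n_states))

def value(belief, t):
    """Optimal expected reward-to-go from `belief` with t stages left."""
    if t == 0:
        return 0.0
    best = -np.inf
    for a in range(n_actions):
        immediate = belief @ R[a]
        future = 0.0
        for o in range(n_obs):
            # Unnormalized Bayes update:
            # b'(s') = sum_s b(s) P[a][s, s'] O[a][s', o]
            b_next = (belief @ P[a]) * O[a][:, o]
            p_o = b_next.sum()  # probability of observing o
            if p_o > 1e-12:
                future += p_o * value(b_next / p_o, t - 1)
        best = max(best, immediate + future)
    return best

b0 = np.full(n_states, 1.0 / n_states)  # uniform initial belief
print(value(b0, horizon))
```

In the Det-Pomdp subclass studied in the paper, transition and observation mappings are deterministic, so each reachable belief is supported on a subset of states that can only shrink along a trajectory; roughly speaking, this is the kind of structure that complexity bounds for the class can exploit.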
Pages: 345–382 (37 pages)