Simplifying Complex Observation Models in Continuous POMDP Planning with Probabilistic Guarantees and Practice

Citations: 0
Authors
Lev-Yehudi, Idan [1 ]
Barenboim, Moran [1 ]
Indelman, Vadim [2 ]
Affiliations
[1] Technion Israel Inst Technol, TASP, IL-32000 Haifa, Israel
[2] Technion Israel Inst Technol, Dept Aerosp Engn, IL-32000 Haifa, Israel
Funding
Israel Science Foundation
Keywords: (none listed)
DOI: none available
CLC classification: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Solving partially observable Markov decision processes (POMDPs) with high-dimensional and continuous observations, such as camera images, is required for many real-life robotics and planning problems. Recent research has suggested machine-learned probabilistic models as observation models, but their use is currently too computationally expensive for online deployment. We address the question of what the implications are of using simplified observation models for planning, while retaining formal guarantees on the quality of the solution. Our main contribution is a novel probabilistic bound based on a statistical total variation distance of the simplified model. By generalizing recent results on particle-belief MDP concentration bounds, we show that it bounds the theoretical POMDP value with respect to the original model in terms of the empirical planned value obtained with the simplified model. Our calculations can be separated into offline and online parts, and we arrive at formal guarantees without having to access the costly model at all during planning, which is itself a novel result. Finally, we demonstrate in simulation how to integrate the bound into the routine of an existing continuous online POMDP solver.
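To make the central quantity concrete, the following is a minimal sketch of how one might estimate a total variation distance between an original and a simplified observation model from samples over a discretized observation space, and plug it into a generic value-gap penalty of the form used in simulation-lemma-style bounds. The function names, the sample-based estimator, and the `2 * horizon * r_max * tv` penalty shape are illustrative assumptions for exposition; they are not the paper's actual bound, which is probabilistic and derived via particle-belief MDP concentration results.

```python
from collections import Counter


def empirical_tv_distance(samples_p, samples_q):
    """Estimate TV(P, Q) = 0.5 * sum_x |P(x) - Q(x)| from two sample sets
    drawn over the same discretized observation space.
    (Sketch: a plug-in estimator, not the paper's statistical TV distance.)"""
    p, q = Counter(samples_p), Counter(samples_q)
    n_p, n_q = len(samples_p), len(samples_q)
    support = set(p) | set(q)  # union of observed outcomes
    return 0.5 * sum(abs(p[x] / n_p - q[x] / n_q) for x in support)


def value_lower_bound(planned_value, tv, horizon, r_max):
    """Hypothetical penalty shape: the value under the original model is at
    least the value planned with the simplified model, minus a term that
    grows with the model mismatch (tv), the horizon, and the reward scale."""
    return planned_value - 2.0 * horizon * r_max * tv
```

For example, with samples `[0, 0, 1, 1]` versus `[0, 1, 1, 1]` the plug-in estimate is 0.25; a planned value of 10 over horizon 5 with unit rewards then yields a lower bound of 7.5 under this illustrative penalty. The offline/online split described in the abstract would correspond to computing the distance estimate offline, so the expensive model is never queried during planning.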
Pages: 20176-20184 (9 pages)