Policy Iteration for Decentralized Control of Markov Decision Processes

Cited by: 50
Authors
Bernstein, Daniel S. [1 ]
Amato, Christopher [1 ]
Hansen, Eric A. [2 ]
Zilberstein, Shlomo [3 ]
机构
[1] Univ Massachusetts, Dept Comp Sci, Amherst, MA 01003 USA
[2] Mississippi State Univ, Dept CS & Engn, Mississippi State, MS 39762 USA
[3] Univ Massachusetts, Dept Comp Sci, Amherst, MA 01003 USA
Funding
National Science Foundation (USA)
DOI
10.1613/jair.2667
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Coordination of distributed agents is required for problems arising in many areas, including multi-robot systems, networking and e-commerce. As a formal framework for such problems, we use the decentralized partially observable Markov decision process (DEC-POMDP). Though much work has been done on optimal dynamic programming algorithms for the single-agent version of the problem, optimal algorithms for the multiagent case have been elusive. The main contribution of this paper is an optimal policy iteration algorithm for solving DEC-POMDPs. The algorithm uses stochastic finite-state controllers to represent policies. The solution can include a correlation device, which allows agents to correlate their actions without communicating. This approach alternates between expanding the controller and performing value-preserving transformations, which modify the controller without sacrificing value. We present two efficient value-preserving transformations: one can reduce the size of the controller and the other can improve its value while keeping the size fixed. Empirical results demonstrate the usefulness of value-preserving transformations in increasing value while keeping controller size to a minimum. To broaden the applicability of the approach, we also present a heuristic version of the policy iteration algorithm, which sacrifices convergence to optimality. This algorithm further reduces the size of the controllers at each step by assuming that probability distributions over the other agents' actions are known. While this assumption may not hold in general, it helps produce higher quality solutions in our test problems.
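The abstract's core loop, alternating between controller expansion and value-preserving transformations, can be illustrated with a toy sketch. This is a hypothetical illustration, not the paper's implementation: the function names (`exhaustive_backup`, `prune_duplicates`) are invented, the "transformation" shown is only trivial node deduplication rather than the paper's linear-programming-based reductions, and the growth it exposes (|A|·|Q|^|O| candidate nodes per backup) is exactly why such transformations matter.

```python
from itertools import product

def exhaustive_backup(nodes, actions, observations):
    """Grow a finite-state controller by one exhaustive backup: add a
    deterministic node for every action paired with every mapping from
    observations to existing nodes. Illustrative sketch only."""
    new_nodes = set(nodes)
    for a in actions:
        for succ in product(nodes, repeat=len(observations)):
            new_nodes.add((a, succ))  # node = (action, successor per observation)
    return new_nodes

def prune_duplicates(nodes):
    """A trivially value-preserving transformation: merging duplicate nodes
    cannot change any policy's value. Stands in for the paper's stronger
    size-reducing and value-improving transformations."""
    return set(nodes)  # sets deduplicate; placeholder for real pruning

def policy_iteration_sketch(actions, observations, iterations=2):
    """Alternate expansion and (placeholder) value-preserving transformation."""
    nodes = {("init", ())}  # start from a single arbitrary node
    for _ in range(iterations):
        nodes = exhaustive_backup(nodes, actions, observations)
        nodes = prune_duplicates(nodes)
    return nodes

# Toy run: 2 actions, 1 observation, 2 backup rounds.
controller = policy_iteration_sketch(actions=["a0", "a1"], observations=["o0"])
print(len(controller))  # → 7
```

Even in this tiny setting the controller grows from 1 node to 3 and then 7 in two backups; with more observations the growth is doubly exponential, which motivates interleaving transformations that shrink the controller without sacrificing value.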
Pages: 89-132
Page count: 44
Related papers
50 items total
  • [41] Fuzzy Reinforcement Learning Control for Decentralized Partially Observable Markov Decision Processes
    Sharma, Rajneesh
    Spaan, Matthijs T. J.
    IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS (FUZZ 2011), 2011, : 1422 - 1429
  • [42] Topological Value Iteration Algorithm for Markov Decision Processes
    Dai, Peng
    Goldsmith, Judy
    20TH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2007, : 1860 - 1865
  • [43] New prioritized value iteration for Markov decision processes
    de Guadalupe Garcia-Hernandez, Ma.
    Ruiz-Pinales, Jose
    Onaindia, Eva
    Gabriel Avina-Cervantes, J.
    Ledesma-Orozco, Sergio
    Alvarado-Mendez, Edgar
    Reyes-Ballesteros, Alberto
    ARTIFICIAL INTELLIGENCE REVIEW, 2012, 37 (02) : 157 - 167
  • [45] Approximate Policy Iteration for Markov Control Revisited
    Gosavi, Abhijit
    COMPLEX ADAPTIVE SYSTEMS 2012, 2012, 12 : 90 - 95
  • [46] Policy Iteration for Continuous-Time Average Reward Markov Decision Processes in Polish Spaces
    Zhu, Quanxin
    Yang, Xinsong
    Huang, Chuangxia
    ABSTRACT AND APPLIED ANALYSIS, 2009,
  • [47] ON THE CONVERGENCE OF POLICY ITERATION IN FINITE STATE UNDISCOUNTED MARKOV DECISION-PROCESSES - THE UNICHAIN CASE
    HORDIJK, A
    PUTERMAN, ML
    MATHEMATICS OF OPERATIONS RESEARCH, 1987, 12 (01) : 163 - 176
  • [48] Mean Field Approximation of the Policy Iteration Algorithm for Graph-based Markov Decision Processes
    Peyrard, Nathalie
    Sabbadin, Regis
    ECAI 2006, PROCEEDINGS, 2006, 141 : 595 - +
  • [49] Average-Reward Decentralized Markov Decision Processes
    Petrik, Marek
    Zilberstein, Shlomo
    20TH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2007, : 1997 - 2002
  • [50] Solving transition independent decentralized Markov decision processes
    Becker, Raphen
    Zilberstein, Shlomo
    Lesser, Victor
    Goldman, Claudia V.
JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2004, 22 : 423 - 455