Policy Iteration for Decentralized Control of Markov Decision Processes

Cited by: 50
Authors
Bernstein, Daniel S. [1 ]
Amato, Christopher [1 ]
Hansen, Eric A. [2 ]
Zilberstein, Shlomo [3 ]
Affiliations
[1] Univ Massachusetts, Dept Comp Sci, Amherst, MA 01003 USA
[2] Mississippi State Univ, Dept CS & Engn, Mississippi State, MS 39762 USA
[3] Univ Massachusetts, Dept Comp Sci, Amherst, MA 01003 USA
Funding
National Science Foundation (USA);
Keywords
DOI
10.1613/jair.2667
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Coordination of distributed agents is required for problems arising in many areas, including multi-robot systems, networking and e-commerce. As a formal framework for such problems, we use the decentralized partially observable Markov decision process (DEC-POMDP). Though much work has been done on optimal dynamic programming algorithms for the single-agent version of the problem, optimal algorithms for the multiagent case have been elusive. The main contribution of this paper is an optimal policy iteration algorithm for solving DEC-POMDPs. The algorithm uses stochastic finite-state controllers to represent policies. The solution can include a correlation device, which allows agents to correlate their actions without communicating. This approach alternates between expanding the controller and performing value-preserving transformations, which modify the controller without sacrificing value. We present two efficient value-preserving transformations: one can reduce the size of the controller and the other can improve its value while keeping the size fixed. Empirical results demonstrate the usefulness of value-preserving transformations in increasing value while keeping controller size to a minimum. To broaden the applicability of the approach, we also present a heuristic version of the policy iteration algorithm, which sacrifices convergence to optimality. This algorithm further reduces the size of the controllers at each step by assuming that probability distributions over the other agents' actions are known. While this assumption may not hold in general, it helps produce higher quality solutions in our test problems.
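The policy representation described in the abstract, a stochastic finite-state controller whose nodes emit actions probabilistically and transition on observations, can be illustrated with a minimal sketch. All names, probabilities, and the two-node example below are invented for illustration and are not taken from the paper; this shows only how such a controller executes, not the policy iteration algorithm itself.

```python
import random


class StochasticFSC:
    """Minimal sketch of a stochastic finite-state controller (FSC).

    Each controller node holds a distribution over actions; after an
    observation, the controller moves to a successor node drawn from a
    distribution conditioned on (node, observation).
    """

    def __init__(self, action_dist, transition_dist, start_node=0):
        # action_dist[node] -> {action: prob}
        # transition_dist[(node, obs)] -> {next_node: prob}
        self.action_dist = action_dist
        self.transition_dist = transition_dist
        self.node = start_node

    @staticmethod
    def _sample(dist, rng):
        # Draw one outcome from a {outcome: prob} dictionary.
        r, acc = rng.random(), 0.0
        for outcome, p in dist.items():
            acc += p
            if r < acc:
                return outcome
        return outcome  # fallback for floating-point round-off

    def act(self, rng):
        # Sample an action from the current node's action distribution.
        return self._sample(self.action_dist[self.node], rng)

    def observe(self, obs, rng):
        # Stochastically transition to a successor node given observation.
        self.node = self._sample(self.transition_dist[(self.node, obs)], rng)


# Toy two-node controller over actions {"a", "b"} and observations
# {"x", "y"}; all numbers are made up for illustration.
rng = random.Random(0)
fsc = StochasticFSC(
    action_dist={0: {"a": 0.7, "b": 0.3}, 1: {"a": 0.0, "b": 1.0}},
    transition_dist={
        (0, "x"): {0: 1.0}, (0, "y"): {1: 1.0},
        (1, "x"): {0: 0.5, 1: 0.5}, (1, "y"): {1: 1.0},
    },
)
action = fsc.act(rng)       # sample an action at node 0
fsc.observe("y", rng)       # observing "y" at node 0 moves to node 1
```

In the paper's setting each agent runs its own such controller, and the optional correlation device is an additional shared finite-state machine whose state all agents condition on without communicating.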
Pages: 89-132
Page count: 44
Related papers
50 records total
  • [11] Accelerated modified policy iteration algorithms for Markov decision processes
    Shlakhter, Oleksandr
    Lee, Chi-Guhn
    MATHEMATICAL METHODS OF OPERATIONS RESEARCH, 2013, 78 (01) : 61 - 76
  • [12] Policy Iteration for Parameterized Markov Decision Processes and Its Application
    Xia, Li
    Jia, Qing-Shan
    2013 9TH ASIAN CONTROL CONFERENCE (ASCC), 2013,
  • [14] The complexity of Policy Iteration is exponential for discounted Markov Decision Processes
    Hollanders, Romain
    Delvenne, Jean-Charles
    Jungers, Raphael M.
    2012 IEEE 51ST ANNUAL CONFERENCE ON DECISION AND CONTROL (CDC), 2012, : 5997 - 6002
  • [15] Decentralized Control of Partially Observable Markov Decision Processes
    Amato, Christopher
    Chowdhary, Girish
    Geramifard, Alborz
    Ure, N. Kemal
    Kochenderfer, Mykel J.
    2013 IEEE 52ND ANNUAL CONFERENCE ON DECISION AND CONTROL (CDC), 2013, : 2398 - 2405
  • [16] Policy iteration type algorithms for recurrent state Markov decision processes
    Patek, SD
    COMPUTERS & OPERATIONS RESEARCH, 2004, 31 (14) : 2333 - 2347
  • [18] Approximate policy iteration with a policy language bias: Solving relational Markov decision processes
    Fern, A
    Yoon, S
    Givan, R
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2006, 25 : 75 - 118
  • [19] COMPUTATIONAL COMPARISON OF POLICY ITERATION ALGORITHMS FOR DISCOUNTED MARKOV DECISION-PROCESSES
    HARTLEY, R
    LAVERCOMBE, AC
    THOMAS, LC
    COMPUTERS & OPERATIONS RESEARCH, 1986, 13 (04) : 411 - 420
  • [20] Approximate Policy Iteration for Markov Decision Processes via Quantitative Adaptive Aggregations
    Abate, Alessandro
    Ceska, Milan
    Kwiatkowska, Marta
    AUTOMATED TECHNOLOGY FOR VERIFICATION AND ANALYSIS, ATVA 2016, 2016, 9938 : 13 - 31