Reinforcement learning for long-run average cost

Cited by: 72
Authors
Gosavi, A [1 ]
Affiliation
[1] SUNY Buffalo, Buffalo, NY 14260 USA
Keywords
stochastic processes; reinforcement learning; two time scales
DOI
10.1016/S0377-2217(02)00874-3
CLC classification
C93 [Management Science]
Discipline codes
12; 1201; 1202; 120202
Abstract
A large class of sequential decision-making problems under uncertainty can be modeled as Markov and semi-Markov decision problems (SMDPs), when their underlying probability structure is a Markov chain. They may be solved by using classical dynamic programming (DP) methods. However, DP methods suffer from the curse of dimensionality and break down rapidly in the face of large state spaces. In addition, DP methods require the exact computation of the so-called transition probabilities, which are often hard to obtain, and are hence said to suffer from the curse of modeling as well. In recent years, a simulation-based method, called reinforcement learning (RL), has emerged in the literature. It can, to a great extent, relieve stochastic DP of its curses by generating 'near-optimal' solutions to problems having large state spaces and complex transition mechanisms. In this paper, a simulation-based algorithm that solves Markov and semi-Markov decision problems is presented, along with its convergence analysis. The algorithm involves a step-size-based transformation on two time scales. Its convergence analysis is based on a recent result on the asynchronous convergence of iterates on two time scales. We present numerical results from the new algorithm on a classical preventive maintenance case study of a reasonable size, where results on the optimal policy are also available. In addition, we present a tutorial that explains the framework of RL in the context of long-run average cost SMDPs. (C) 2003 Elsevier B.V. All rights reserved.
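The record does not contain the paper's algorithm itself, so the following is only a rough, hypothetical sketch of the two-timescale idea the abstract describes: relative Q-values are updated on a fast step-size schedule while the long-run average-cost estimate is updated on a slower one (in the style of SMART-type average-reward RL for SMDPs). The toy two-state maintenance model, its costs, sojourn times, and step-size schedules below are all invented for illustration and are not the case study from the paper.

```python
import random

# Toy 2-state preventive-maintenance SMDP (hypothetical numbers).
# States: 0 = machine OK, 1 = machine degraded.
# Actions: 0 = continue production, 1 = perform maintenance.
def step(state, action):
    """Return (next_state, cost, sojourn_time) for the toy SMDP."""
    if action == 1:                       # maintain: pay 5 over 2 time units
        return 0, 5.0, 2.0
    if state == 0:                        # produce on a healthy machine
        nxt = 1 if random.random() < 0.3 else 0
        return nxt, 1.0, 1.0
    if random.random() < 0.5:             # degraded machine may fail
        return 0, 20.0, 3.0               # failure + repair
    return 1, 1.0, 1.0

random.seed(0)
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
rho = 0.0                                 # estimate of long-run average cost
state = 0
for n in range(1, 100_000):
    # epsilon-greedy exploration over the two actions
    if random.random() < 0.1:
        action = random.choice((0, 1))
    else:
        action = min((0, 1), key=lambda a: Q[(state, a)])
    nxt, cost, tau = step(state, action)
    alpha = 100.0 / (1000.0 + n)          # fast timescale: Q-values
    beta = 10.0 / (1000.0 + n)            # slow timescale: average cost
    target = cost - rho * tau + min(Q[(nxt, a)] for a in (0, 1))
    Q[(state, action)] += alpha * (target - Q[(state, action)])
    # Robbins-Monro update whose fixed point is E[cost]/E[time]
    rho += beta * (cost - rho * tau)
    state = nxt

policy = {s: min((0, 1), key=lambda a: Q[(s, a)]) for s in (0, 1)}
print(policy, round(rho, 3))
```

With these invented numbers the learned policy is to keep producing while the machine is healthy and maintain once it degrades; the key design point is that `beta` decays faster than `alpha`, so the average-cost estimate `rho` tracks a quasi-static target while the Q-values move on the faster timescale.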
Pages: 654-674 (21 pages)