Stochastic Policy Gradient Ascent in Reproducing Kernel Hilbert Spaces

Cited by: 12
Authors
Paternain, Santiago [1 ]
Bazerque, Juan Andres [2 ]
Small, Austin [1 ]
Ribeiro, Alejandro [1 ]
Affiliations
[1] Univ Penn, Dept Elect & Syst Engn, Philadelphia, PA 19104 USA
[2] Univ Republica, Fac Ingn, Dept Ingn Elect, Montevideo 11800, Barrio, Uruguay
Keywords
Stochastic processes; Kernel; Convergence; Complexity theory; Trajectory; Hilbert space; Standards; Autonomous systems; gradient methods; Markov processes; unsupervised learning
DOI
10.1109/TAC.2020.3029317
Chinese Library Classification
TP [automation technology; computer technology];
Discipline classification code
0812 ;
Abstract
Reinforcement learning consists of finding policies that maximize an expected cumulative long-term reward in a Markov decision process with unknown transition probabilities and instantaneous rewards. In this article, we consider the problem of finding such optimal policies while assuming they are continuous functions belonging to a reproducing kernel Hilbert space (RKHS). To learn the optimal policy, we introduce a stochastic policy gradient ascent algorithm with three novel features. First, the stochastic estimates of policy gradients are unbiased. Second, the variance of stochastic gradients is reduced by drawing on ideas from numerical differentiation. Third, policy complexity is controlled using sparse RKHS representations. The first feature is instrumental in proving convergence to a stationary point of the expected cumulative reward. The second facilitates reasonable convergence times. The third is a necessity in practical implementations, which we show can be done in a way that does not eliminate convergence guarantees. Numerical examples on standard problems illustrate successful learning of policies with low-complexity representations that are close to stationary points of the expected cumulative reward.
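As a rough illustration of the kind of method the abstract describes, the sketch below runs REINFORCE-style stochastic gradient ascent on a Gaussian policy whose mean is an RKHS function (a weighted sum of Gaussian kernels), with a baseline for variance reduction and pruning of small-weight kernel centers for sparsity. The toy 1-D environment, the kernel bandwidth, the baseline rule, and the pruning threshold are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative sketch only: REINFORCE-style functional gradient ascent
# for a Gaussian policy whose mean h(s) = sum_i w_i k(s, s_i) lives in an
# RKHS, with a crude weight-pruning step to keep the representation sparse.

rng = np.random.default_rng(0)

def kernel(s, centers, bw=1.0):
    # Gaussian (RBF) kernel evaluations k(s, s_i)
    return np.exp(-0.5 * ((s - centers) / bw) ** 2)

def policy_mean(s, centers, weights):
    # RKHS representer: h(s) = sum_i w_i k(s, s_i)
    return float(weights @ kernel(s, centers)) if len(centers) else 0.0

def rollout(centers, weights, sigma=0.5, T=20):
    # Toy 1-D MDP: the state drifts by the action; reward penalizes |state|.
    s, traj, R = rng.normal(), [], 0.0
    for _ in range(T):
        mu = policy_mean(s, centers, weights)
        a = mu + sigma * rng.normal()      # Gaussian exploration noise
        R += -abs(s)                       # instantaneous reward
        traj.append((s, a, mu))
        s = 0.9 * s + a + 0.1 * rng.normal()
    return traj, R

centers, weights, baseline = np.array([]), np.array([]), 0.0
for _ in range(200):
    traj, R = rollout(centers, weights)
    adv = R - baseline                     # baseline for variance reduction
    baseline += 0.05 * (R - baseline)
    step = 0.01 / len(traj)
    for s, a, mu in traj:
        # Functional gradient of log N(a; h(s), sigma^2) w.r.t. h is
        # ((a - mu) / sigma^2) * k(s, .), i.e. one new kernel centered at s.
        centers = np.append(centers, s)
        weights = np.append(weights, step * adv * (a - mu) / 0.25)
    # Sparsification: drop kernel centers whose weight is negligible
    keep = np.abs(weights) > 1e-4
    centers, weights = centers[keep], weights[keep]

print("kernel centers retained:", len(centers))
```

Without the pruning step, each rollout would add one kernel center per time step, so the dictionary would grow linearly without bound; thresholding the weights is the simplest stand-in for the sparse RKHS representations the paper controls complexity with.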
Pages: 3429 - 3444
Page count: 16
Related Papers
50 records in total
  • [1] A STOCHASTIC BEHAVIOR ANALYSIS OF STOCHASTIC RESTRICTED-GRADIENT DESCENT ALGORITHM IN REPRODUCING KERNEL HILBERT SPACES
    Takizawa, Masa-aki
    Yukawa, Masahiro
    Richard, Cédric
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), 2015, : 2001 - 2005
  • [2] Functional Gradient Motion Planning in Reproducing Kernel Hilbert Spaces
    Marinho, Zita
    Boots, Byron
    Dragan, Anca
    Byravan, Arunkumar
    Gordon, Geoffrey J.
    Srinivasa, Siddhartha
    [J]. ROBOTICS: SCIENCE AND SYSTEMS XII, 2016,
  • [3] Stochastic processes with sample paths in reproducing kernel Hilbert spaces
    Lukic, MN
    Beder, JH
    [J]. TRANSACTIONS OF THE AMERICAN MATHEMATICAL SOCIETY, 2001, 353 (10) : 3945 - 3969
  • [4] Pasting Reproducing Kernel Hilbert Spaces
    Sawano, Yoshihiro
    [J]. NEW TRENDS IN ANALYSIS AND INTERDISCIPLINARY APPLICATIONS, 2017, : 401 - 407
  • [5] A Primer on Reproducing Kernel Hilbert Spaces
    Manton, Jonathan H.
    Amblard, Pierre-Olivier
    [J]. FOUNDATIONS AND TRENDS IN SIGNAL PROCESSING, 2014, 8 (1-2): : 1 - 126
  • [6] On isomorphism of reproducing kernel Hilbert spaces
    Napalkov, V. V.
    Napalkov, V. V., Jr.
    [J]. DOKLADY MATHEMATICS, 2017, 95 (03) : 270 - 272
  • [7] Noncommutative reproducing kernel Hilbert spaces
    Ball, Joseph A.
    Marx, Gregory
    Vinnikov, Victor
    [J]. JOURNAL OF FUNCTIONAL ANALYSIS, 2016, 271 (07) : 1844 - 1920
  • [8] RELATIVE REPRODUCING KERNEL HILBERT SPACES
    Alpay, Daniel
    Jorgensen, Palle
    Volok, Dan
    [J]. PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY, 2014, 142 (11) : 3889 - 3895
  • [9] On reproducing kernel Hilbert spaces of polynomials
    Li, XJ
    [J]. MATHEMATISCHE NACHRICHTEN, 1997, 185 : 115 - 148