Lower Bounds for Learning in Revealing POMDPs

Cited by: 0
Authors
Chen, Fan [1 ]
Wang, Huan [2 ]
Xiong, Caiming [2 ]
Mei, Song [3 ]
Bai, Yu [2 ]
Affiliations
[1] Peking Univ, Beijing, Peoples R China
[2] Salesforce AI Res, San Francisco, CA 94105 USA
[3] Univ Calif Berkeley, Berkeley, CA 94720 USA
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper studies the fundamental limits of reinforcement learning (RL) in the challenging partially observable setting. While it is well-established that learning in Partially Observable Markov Decision Processes (POMDPs) requires exponentially many samples in the worst case, a surge of recent work shows that polynomial sample complexities are achievable under the revealing condition, a natural condition that requires the observables to reveal some information about the unobserved latent states. However, the fundamental limits for learning in revealing POMDPs are much less understood, with existing lower bounds being rather preliminary and having substantial gaps from the current best upper bounds. We establish strong PAC and regret lower bounds for learning in revealing POMDPs. Our lower bounds scale polynomially in all relevant problem parameters in a multiplicative fashion, and achieve significantly smaller gaps against the current best upper bounds, providing a solid starting point for future studies. In particular, for multi-step revealing POMDPs, we show that (1) the latent state-space dependence is at least $\Omega(S^{1.5})$ in the PAC sample complexity, which is notably harder than the $\widetilde{\Theta}(S)$ scaling for fully-observable MDPs; (2) any polynomial sublinear regret is at least $\Omega(T^{2/3})$, suggesting its fundamental difference from the single-step case where $\widetilde{O}(\sqrt{T})$ regret is achievable. Technically, our hard instance construction adapts techniques in distribution testing, which is new to the RL literature and may be of independent interest. We also complement our results with new sharp regret upper bounds for strongly B-stable PSRs, which include single-step revealing POMDPs as a special case.
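For quick reference, the scalings claimed in the abstract can be collected into a single display. This is only a sketch restating the abstract's statements; the shorthand $n_{\mathrm{PAC}}$ and $\mathrm{Reg}(T)$ is introduced here, and the paper's full bounds involve additional problem parameters (actions, observations, horizon, revealing constant) that this record does not list.

\[
  \underbrace{n_{\mathrm{PAC}} \,\ge\, \Omega\bigl(S^{1.5}\bigr)}_{\text{multi-step revealing POMDPs}}
  \ \ \text{vs.}\ \
  \underbrace{n_{\mathrm{PAC}} \,=\, \widetilde{\Theta}(S)}_{\text{fully-observable MDPs}},
  \qquad
  \underbrace{\mathrm{Reg}(T) \,\ge\, \Omega\bigl(T^{2/3}\bigr)}_{\text{multi-step revealing}}
  \ \ \text{vs.}\ \
  \underbrace{\widetilde{O}\bigl(\sqrt{T}\bigr)}_{\text{single-step revealing}}.
\]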
Pages: 58
Related Papers
50 in total
  • [41] Scalable Planning and Learning for Multiagent POMDPs
    Amato, Christopher
    Oliehoek, Frans A.
    PROCEEDINGS OF THE TWENTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2015, : 1995 - 2002
  • [42] Deep Variational Reinforcement Learning for POMDPs
    Igl, Maximilian
    Zintgraf, Luisa
    Le, Tuan Anh
    Wood, Frank
    Whiteson, Shimon
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [43] Bayesian Reinforcement Learning in Factored POMDPs
    Katt, Sammie
    Oliehoek, Frans A.
    Amato, Christopher
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 7 - 15
  • [44] ARE LOWER BOUNDS ON THE COMPLEXITY LOWER BOUNDS FOR UNIVERSAL CIRCUITS
    NIGMATULLIN, RG
    LECTURE NOTES IN COMPUTER SCIENCE, 1985, 199 : 331 - 340
  • [45] Exact Learning Algorithms, Betting Games, and Circuit Lower Bounds
    Harkins, Ryan C.
    Hitchcock, John M.
    ACM TRANSACTIONS ON COMPUTATION THEORY, 2013, 5 (04)
  • [46] Circuit lower bounds from learning-theoretic approaches
    Kawachi, Akinori
    THEORETICAL COMPUTER SCIENCE, 2018, 733 : 83 - 98
  • [47] Exact Learning Algorithms, Betting Games, and Circuit Lower Bounds
    Harkins, Ryan C.
    Hitchcock, John M.
    AUTOMATA, LANGUAGES AND PROGRAMMING, ICALP, PT I, 2011, 6755 : 416 - 423
  • [48] Learning and Lower Bounds for AC0 with Threshold Gates
    Gopalan, Parikshit
    Servedio, Rocco A.
    APPROXIMATION, RANDOMIZATION, AND COMBINATORIAL OPTIMIZATION: ALGORITHMS AND TECHNIQUES, 2010, 6302 : 588 - +
  • [49] Conspiracies Between Learning Algorithms, Circuit Lower Bounds, and Pseudorandomness
    Oliveira, Igor C.
    Santhanam, Rahul
    32ND COMPUTATIONAL COMPLEXITY CONFERENCE (CCC 2017), 2017, 79
  • [50] Minimax Lower Bounds for Kronecker-Structured Dictionary Learning
    Shakeri, Zahra
    Bajwa, Waheed U.
    Sarwate, Anand D.
    2016 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY, 2016, : 1148 - 1152