Guided Policy Search via Approximate Mirror Descent

Cited by: 0
Authors
Montgomery, William [1 ]
Levine, Sergey [1 ]
Affiliation
[1] Univ Washington, Dept Comp Sci & Engn, Seattle, WA 98195 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016) | 2016 / Vol. 29
Keywords
DOI
Not available
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Guided policy search algorithms can be used to optimize complex nonlinear policies, such as deep neural networks, without directly computing policy gradients in the high-dimensional parameter space. Instead, these methods use supervised learning to train the policy to mimic a "teacher" algorithm, such as a trajectory optimizer or a trajectory-centric reinforcement learning method. Guided policy search methods provide asymptotic local convergence guarantees by construction, but it is not clear how much the policy improves within a small, finite number of iterations. We show that guided policy search algorithms can be interpreted as an approximate variant of mirror descent, where the projection onto the constraint manifold is not exact. We derive a new guided policy search algorithm that is simpler and provides appealing improvement and convergence guarantees in simplified convex and linear settings, and show that in the more general nonlinear setting, the error in the projection step can be bounded. We provide empirical results on several simulated robotic navigation and manipulation tasks that show that our method is stable and achieves similar or better performance when compared to prior guided policy search methods, with a simpler formulation and fewer hyperparameters.
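The alternating structure described in the abstract, a "teacher" improvement step followed by an approximate projection onto the policy class via supervised learning, can be illustrated with a small sketch. The snippet below is only a conceptual illustration on a 1-D linear-quadratic toy problem, not the authors' algorithm or implementation; the linear controllers, the trust-region weight eta, and the helper rollout are hypothetical choices made for the example.

```python
import numpy as np

# Toy 1-D system: x_{t+1} = x_t + u_t, cost = x^2 + 0.01 * u^2.
# The "teacher" is a local linear controller u = -k_local * x; the "student"
# (global policy) is another linear controller u = -k_global * x fit by
# regression on teacher actions, playing the role of the approximate
# projection step. This is an illustrative sketch, not the paper's method.

def rollout(k, x0, horizon=20):
    """Roll out the controller u = -k * x; return states, actions, total cost."""
    xs, us, cost = [x0], [], 0.0
    x = x0
    for _ in range(horizon):
        u = -k * x
        cost += x ** 2 + 0.01 * u ** 2
        x = x + u
        xs.append(x)
        us.append(u)
    return np.array(xs[:-1]), np.array(us), cost

k_global = 0.0   # initial (untrained) global policy
k_local = 0.0    # initial local teacher
eta = 0.5        # penalty weight keeping the teacher near the global policy

for iteration in range(10):
    # Teacher step: nudge the local controller toward lower cost while
    # penalizing deviation from the current global policy (mirror-descent-like
    # trust region).
    candidates = k_local + np.linspace(-0.3, 0.3, 61)
    scores = [rollout(k, x0=1.0)[2] + eta * (k - k_global) ** 2 for k in candidates]
    k_local = candidates[int(np.argmin(scores))]

    # Approximate projection step: supervised regression of the global policy
    # onto state-action pairs sampled from the teacher.
    xs, us, _ = rollout(k_local, x0=1.0)
    k_global = -np.dot(xs, us) / np.dot(xs, xs)

    print(f"iter {iteration}: k_local={k_local:.3f}, k_global={k_global:.3f}, "
          f"cost={rollout(k_global, 1.0)[2]:.4f}")
```

In this toy setting the projection is exact because the student can represent the teacher perfectly; the paper's analysis concerns the more realistic case where the supervised fit (and hence the projection) is only approximate.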
Pages: 9