Variance Reduction for Evolutionary Strategies via Structured Control Variates

Cited by: 0
Authors
Tang, Yunhao [1]
Choromanski, Krzysztof [2]
Kucukelbir, Alp [1,3]
Affiliations
[1] Columbia Univ, New York, NY 10027 USA
[2] Google Robotics, Mountain View, CA USA
[3] Fero Labs, New York, NY USA
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Evolution strategies (ES) are a powerful class of blackbox optimization techniques that have recently become a competitive alternative to state-of-the-art policy gradient (PG) algorithms for reinforcement learning (RL). We propose a new method for improving the accuracy of ES algorithms that, in contrast to recent approaches exploiting only the Monte Carlo structure of the gradient estimator, takes advantage of the underlying Markov decision process (MDP) structure to reduce variance. We observe that the gradient of the ES objective can alternatively be computed using reparametrization and PG estimators, which leads to new control variate techniques for gradient estimation in ES optimization. We provide theoretical insights and show through extensive experiments that this RL-specific variance reduction approach outperforms general-purpose variance reduction methods.
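To make the control variate construction in the abstract concrete, below is a minimal Python sketch, not the authors' exact algorithm. The vanilla ES score-function term f(theta + sigma*eps) * eps / sigma is paired with a zero-mean correction built from a differentiable surrogate: by Stein's identity, E[h(theta + sigma*eps) * eps / sigma] = E[grad_h(theta + sigma*eps)] for eps ~ N(0, I), so subtracting the difference of these two estimators changes variance but not bias. The surrogate h, its gradient grad_h, and the coefficient alpha are illustrative assumptions, not quantities from the paper.

import numpy as np

def es_gradient_with_cv(f, h, grad_h, theta, sigma=0.1, n_samples=100,
                        alpha=1.0, seed=0):
    # Estimate the gradient of the Gaussian-smoothed objective
    # E_eps[f(theta + sigma * eps)].  By Stein's identity,
    #   E[h(theta + sigma*eps) * eps / sigma] = E[grad_h(theta + sigma*eps)],
    # so cv_term below has mean zero: it reduces variance without adding bias.
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        eps = rng.standard_normal(theta.shape[0])
        x = theta + sigma * eps
        score = eps / sigma
        es_term = f(x) * score                 # vanilla ES (score-function) term
        cv_term = h(x) * score - grad_h(x)     # zero-mean control variate
        grad += es_term - alpha * cv_term
    return grad / n_samples

# Illustrative check: when the surrogate matches f exactly, the control
# variate cancels most of the Monte Carlo noise in the estimate.
f = lambda x: -np.sum(x ** 2)
grad_f = lambda x: -2.0 * x
theta = np.ones(5)
g = es_gradient_with_cv(f, f, grad_f, theta)   # close to grad_f(theta) = [-2, ..., -2]

In the paper's RL setting the surrogate role is played by MDP-aware quantities (reparametrization and PG estimators of the same objective) rather than a hand-supplied h; the zero-mean-difference principle above is the same.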
Pages: 10