Stochastic Optimization with Importance Sampling for Regularized Loss Minimization

Cited by: 0
Authors
Zhao, Peilin [1 ,2 ,3 ]
Zhang, Tong [2 ,3 ]
Affiliations
[1] ASTAR, Inst Infocomm Res, Data Analyt Dept, Singapore, Singapore
[2] Rutgers State Univ, Dept Stat & Biostat, Piscataway, NJ 08854 USA
[3] Baidu Res, Big Data Lab, Beijing, Peoples R China
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 37 | 2015 / Vol. 37
Funding
US National Science Foundation
Keywords
DOI
Not available
CLC Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Uniform sampling of training data has been commonly used in traditional stochastic optimization algorithms such as Proximal Stochastic Mirror Descent (prox-SMD) and Proximal Stochastic Dual Coordinate Ascent (prox-SDCA). Although uniform sampling can guarantee that the sampled stochastic quantity is an unbiased estimate of the corresponding true quantity, the resulting estimator may have a rather high variance, which negatively affects the convergence of the underlying optimization procedure. In this paper we study stochastic optimization, including prox-SMD and prox-SDCA, with importance sampling, which improves the convergence rate by reducing the stochastic variance. We theoretically analyze the algorithms and empirically validate their effectiveness.
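The core idea the abstract describes (sample non-uniformly, then reweight so the estimate stays unbiased) can be sketched with a toy importance-sampling SGD loop. This is an illustration under the common smoothness-proportional sampling heuristic on synthetic data, not the paper's exact prox-SMD or prox-SDCA algorithms: example i is drawn with probability p_i proportional to its smoothness constant L_i = ||x_i||^2, and its gradient is scaled by 1/(n p_i) so the expected step equals the full gradient.

```python
import numpy as np

# Toy L2-regularized least-squares problem:
#   P(w) = (1/n) * sum_i 0.5 * (x_i . w - y_i)^2 + (lam/2) * ||w||^2
rng = np.random.default_rng(0)
n, d, lam = 200, 10, 0.1
# Rows with very different norms, so uniform sampling would have high variance.
X = rng.normal(size=(n, d)) * rng.uniform(0.1, 5.0, size=(n, 1))
y = X @ rng.normal(size=d)

# Importance sampling: p_i proportional to the smoothness constant
# L_i = ||x_i||^2 of the i-th loss term.
L = np.sum(X ** 2, axis=1)
p = L / L.sum()

w = np.zeros(d)
eta = 1.0 / (L.mean() + lam)  # step size scales with the *average* L_i
for t in range(20000):
    i = rng.choice(n, p=p)
    g_i = (X[i] @ w - y[i]) * X[i]  # gradient of the i-th loss term
    # Reweighting by 1/(n * p_i) keeps the loss-gradient estimate unbiased;
    # the regularizer gradient lam * w is added deterministically.
    w -= eta * (g_i / (n * p[i]) + lam * w)
```

With uniform sampling, a safe step size must scale with the worst-case constant max_i L_i, whereas sampling proportionally to L_i lets it scale with the average of the L_i, which is the variance-reduction effect the abstract refers to.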
Pages: 1 - 9
Page count: 9
Related Papers
(50 total)
  • [31] Stochastic actions for diffusive dynamics: Reweighting, sampling, and minimization
    Adib, Artur B.
    JOURNAL OF PHYSICAL CHEMISTRY B, 2008, 112 (19): 5910 - 5916
  • [32] A Stochastic Derivative-Free Optimization Method with Importance Sampling: Theory and Learning to Control
    Bibi, Adel
    Bergou, El Houcine
    Sener, Ozan
    Ghanem, Bernard
    Richtarik, Peter
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34: 3275 - 3282
  • [33] Stochastic Loss Minimization for Power Distribution Networks
    Kekatos, Vassilis
    Wang, Gang
    Giannakis, Georgios B.
    2014 NORTH AMERICAN POWER SYMPOSIUM (NAPS), 2014
  • [34] Stochastic optimization for global minimization and geostatistical calibration
    Jang, M
    Choe, J
    JOURNAL OF HYDROLOGY, 2002, 266 (1-2): 40 - 52
  • [35] Forward Backward Splitting for Regularized Stochastic Optimization problems
    Luo, Dan
    Weng, Yang
    Hong, Wen-Xing
    2017 CHINESE AUTOMATION CONGRESS (CAC), 2017: 7401 - 7406
  • [36] Regularized quasi-monotone method for stochastic optimization
    V. Kungurtsev
    V. Shikhman
    OPTIMIZATION LETTERS, 2023, 17: 1215 - 1228
  • [37] Regularized quasi-monotone method for stochastic optimization
    Kungurtsev, V
    Shikhman, V
    OPTIMIZATION LETTERS, 2023, 17 (05): 1215 - 1228
  • [38] Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
    Zhang, Yuchen
    Xiao, Lin
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 37, 2015, 37: 353 - 361
  • [39] On Stochastic Primal-Dual Hybrid Gradient Approach for Compositely Regularized Minimization
    Qiao, Linbo
    Lin, Tianyi
    Jiang, Yu-Gang
    Yang, Fan
    Liu, Wei
    Lu, Xicheng
    ECAI 2016: 22ND EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, 285: 167 - 174
  • [40] Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
    Zhang, Yuchen
    Xiao, Lin
    JOURNAL OF MACHINE LEARNING RESEARCH, 2017, 18