A stochastic variance reduced gradient method with adaptive step for stochastic optimization

Times Cited: 1
Authors
Li, Jing [1]
Xue, Dan [1]
Liu, Lei [1]
Qi, Rulei [1]
Affiliations
[1] Qingdao Univ, Sch Math & Stat, Qingdao 266071, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Barzilai-Borwein step sizes; machine learning; stochastic programming; stochastic variance reduced gradient
DOI
10.1002/oca.3109
CLC Number
TP [Automation technology, computer technology]
Discipline Code
0812
Abstract
In this paper, we propose a stochastic variance reduced gradient method with an adaptive step size, referred to as SVRG-New BB, to solve convex stochastic optimization problems. The method can be viewed as a hybrid of the SVRG algorithm and a new Barzilai-Borwein (BB) step-size mechanism, which computes the step size adaptively at each epoch and thereby eliminates manual step-size tuning. Under the assumption that the objective function is strongly convex, we prove that the algorithm converges linearly. Numerical experiments on regularized logistic regression problems show that, with properly chosen parameters, SVRG-New BB outperforms existing stochastic gradient methods.
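The abstract describes SVRG combined with a BB-type adaptive step size but does not give the exact "New BB" formula. The sketch below is a minimal Python illustration of the general SVRG-BB template, using the classical BB rule of Tan et al. (2016) as a stand-in for the paper's rule; the helper names grad_i and full_grad and all parameter choices are hypothetical, not the authors' implementation.

```python
import numpy as np

def svrg_bb(grad_i, full_grad, w0, n, m, eta0, epochs, seed=0):
    """Minimal sketch of SVRG with a Barzilai-Borwein (BB) step size.

    Illustrative only: the step-size rule below is the classical SVRG-BB
    formula (Tan et al., 2016), not the paper's "New BB" rule, whose exact
    form is not stated in the abstract.

    grad_i(w, i) -- gradient of the i-th component f_i at w (assumed helper)
    full_grad(w) -- full gradient (1/n) * sum_i grad f_i(w) (assumed helper)
    m            -- inner-loop length (commonly on the order of 2n)
    eta0         -- step size for the first epoch (BB needs two snapshots)
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    w_old = g_old = None
    eta = eta0
    for _ in range(epochs):
        g = full_grad(w)                   # full gradient at the snapshot w
        if w_old is not None:              # BB step from two snapshots
            s, y = w - w_old, g - g_old
            eta = (s @ s) / (m * (s @ y))  # s'y > 0 under strong convexity
        w_old, g_old = w, g
        x = w.copy()
        for _ in range(m):
            i = rng.integers(n)
            # standard SVRG variance-reduced gradient estimator
            v = grad_i(x, i) - grad_i(w, i) + g
            x = x - eta * v
        w = x                              # continue from the last inner iterate
    return w
```

In the paper's experimental setting (regularized logistic regression), grad_i would return the gradient of a single sample's logistic loss plus the regularization term, and full_grad its average over all n samples.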
Pages: 1327-1342
Number of Pages: 16
Related Papers
50 records in total
  • [1] Guaranteed Sufficient Decrease for Stochastic Variance Reduced Gradient Optimization
    Shang, Fanhua
    Liu, Yuanyuan
    Zhou, Kaiwen
    Cheng, James
    Ng, Kelvin K. W.
    Yoshida, Yuichi
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 84, 2018, 84
  • [2] Momentum-Based Variance-Reduced Proximal Stochastic Gradient Method for Composite Nonconvex Stochastic Optimization
    Xu, Yangyang
    Xu, Yibo
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2023, 196 (01) : 266 - 297
  • [3] Stochastic Variance-Reduced Policy Gradient
    Papini, Matteo
    Binaghi, Damiano
    Canonaco, Giuseppe
    Pirotta, Matteo
    Restelli, Marcello
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [4] Variance Reduced Stochastic Gradient Descent with Neighbors
    Hofmann, Thomas
    Lucchi, Aurelien
    Lacoste-Julien, Simon
    McWilliams, Brian
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 28 (NIPS 2015), 2015, 28
  • [5] Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization
    Yu, Yue
    Huang, Longbo
    PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 3364 - 3370
  • [6] Variance Reduced Stochastic Optimization for PCA and PLS
    Min, Erxue
    Cui, Jianjing
    Long, Jun
    2017 10TH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID), VOL. 1, 2017, : 383 - 388
  • [7] Differentially Private Variance Reduced Stochastic Gradient Descent
    Lee, Jaewoo
    2017 INTERNATIONAL CONFERENCE ON NEW TRENDS IN COMPUTING SCIENCES (ICTCS), 2017, : 161 - 166
  • [8] Accelerating variance-reduced stochastic gradient methods
    Driggs, Derek
    Ehrhardt, Matthias J.
    Schönlieb, Carola-Bibiane
    MATHEMATICAL PROGRAMMING, 2022, 191 (02) : 671 - 715