Stochastic algorithm with optimal convergence rate for strongly convex optimization problems

Cited by: 0
Authors
[1] Shao, Yan-Jian
[2] Tao, Qing
[3] Jiang, Ji-Yuan
[4] Zhou, Bai
Source
Chinese Academy of Sciences, Vol. 25, 2014
Keywords
Convex optimization; Learning systems; Problem solving; Stochastic systems; Structural optimization
DOI
10.13328/j.cnki.jos.004633
Abstract
Stochastic gradient descent (SGD) is one of the most efficient methods for dealing with large-scale data. Recent research shows that the black-box SGD method can reach an O(1/T) convergence rate for strongly convex problems. However, for solving regularized problems with combined L1 and L2 terms, the convergence rate of structural optimization methods such as COMID (composite objective mirror descent) can only attain O(lnT/T). In this paper, a weighted algorithm based on COMID is presented that preserves the sparsity imposed by the L1 regularization term. A proof is provided to show that it achieves an O(1/T) convergence rate. Furthermore, the proposed scheme takes advantage of on-the-fly computation, which reduces the computational cost. The experimental results demonstrate the correctness of the theoretical analysis and the effectiveness of the proposed algorithm. © Copyright 2014, Institute of Software, the Chinese Academy of Science. All Rights Reserved.
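Since the full paper is not reproduced in this record, the following is only a minimal sketch of the general idea described in the abstract: a COMID-style proximal stochastic gradient step for an L1 + L2 regularized strongly convex objective, combined with a polynomially weighted running average of the iterates, which is one standard way to improve the O(lnT/T) rate of uniform averaging to O(1/T). The function names (soft_threshold, weighted_comid_l1_l2, grad_fn), the step size 1/(lam2*t), and the weighting proportional to t are illustrative assumptions, not the authors' exact algorithm; in particular, averaging iterates does not by itself preserve the exact zeros that the paper's weighted scheme is designed to keep.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1; produces exact zeros (sparsity)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def weighted_comid_l1_l2(grad_fn, dim, lam1, lam2, T):
    """
    Sketch (not the paper's exact method) of a weighted COMID-style scheme for
        F(w) = E[f(w; xi)] + lam1 * ||w||_1 + (lam2 / 2) * ||w||^2.
    With the Euclidean mirror map, the COMID update reduces to proximal SGD.
    grad_fn(w, t) returns a stochastic gradient of the smooth loss E[f(w; xi)];
    the L2 term is handled in closed form below.
    """
    w = np.zeros(dim)
    w_avg = np.zeros(dim)
    weight_sum = 0.0
    for t in range(1, T + 1):
        eta = 1.0 / (lam2 * t)                       # strong-convexity step size
        g = grad_fn(w, t)                            # stochastic gradient of the loss
        # Gradient step on the smooth part (loss + L2), then prox of the L1 part.
        w = soft_threshold(w - eta * (g + lam2 * w), eta * lam1)
        # Weighted averaging with weight proportional to t, maintained on the fly
        # (no iterate history is stored); uniform averaging is what incurs the
        # extra lnT factor for strongly convex composite problems.
        weight_sum += t
        w_avg += (t / weight_sum) * (w - w_avg)
    return w_avg

if __name__ == "__main__":
    # Toy usage (hypothetical data): sparse least squares with mini-batch gradients.
    rng = np.random.default_rng(0)
    A, w_true = rng.normal(size=(1000, 20)), np.zeros(20)
    w_true[:3] = [2.0, -1.5, 1.0]
    y = A @ w_true + 0.1 * rng.normal(size=1000)

    def grad_fn(w, t):
        i = rng.integers(0, 1000, size=32)           # random mini-batch of rows
        return A[i].T @ (A[i] @ w - y[i]) / 32

    w_hat = weighted_comid_l1_l2(grad_fn, dim=20, lam1=0.01, lam2=0.1, T=2000)
    print(np.round(w_hat, 2))
```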
Related papers
50 records in total
  • [1] ON CONVERGENCE RATE OF DISTRIBUTED STOCHASTIC GRADIENT ALGORITHM FOR CONVEX OPTIMIZATION WITH INEQUALITY CONSTRAINTS
    Yuan, Deming
    Ho, Daniel W. C.
    Hong, Yiguang
    [J]. SIAM JOURNAL ON CONTROL AND OPTIMIZATION, 2016, 54 (05) : 2872 - 2892
  • [2] Revisiting Optimal Convergence Rate for Smooth and Non-convex Stochastic Decentralized Optimization
    Yuan, Kun
    Huang, Xinmeng
    Chen, Yiming
    Zhang, Xiaohan
    Zhang, Yingya
    Pan, Pan
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [3] Stochastic quasi-Newton methods for non-strongly convex problems: convergence and rate analysis
    Yousefian, Farzad
    Nedic, Angelia
    Shanbhag, Uday V.
    [J]. 2016 IEEE 55TH CONFERENCE ON DECISION AND CONTROL (CDC), 2016, : 4496 - 4503
  • [4] Optimal distributed stochastic mirror descent for strongly convex optimization
    Yuan, Deming
    Hong, Yiguang
    Ho, Daniel W. C.
    Jiang, Guoping
    [J]. AUTOMATICA, 2018, 90 : 196 - 203
  • [5] Stochastic Strongly Convex Optimization via Distributed Epoch Stochastic Gradient Algorithm
    Yuan, Deming
    Ho, Daniel W. C.
    Xu, Shengyuan
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (06) : 2344 - 2357
  • [6] Convergence Rate of a Penalty Method for Strongly Convex Problems with Linear Constraints
    Nedic, Angelia
    Tatarenko, Tatiana
    [J]. 2020 59TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2020, : 372 - 377
  • [7] An Optimal Algorithm for Bandit Convex Optimization with Strongly-Convex and Smooth Loss
    Ito, Shinji
    [J]. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 2229 - 2238
  • [8] A Stochastic Proximal Point Algorithm: Convergence and Application to Convex Optimization
    Bianchi, Pascal
    [J]. 2015 IEEE 6TH INTERNATIONAL WORKSHOP ON COMPUTATIONAL ADVANCES IN MULTI-SENSOR ADAPTIVE PROCESSING (CAMSAP), 2015,
  • [9] Convergence in Distribution of Optimal Solutions to Stochastic Optimization Problems
    Wang, Jinde (Dept. of Mathematics)
    [J]. OR Transactions (运筹学学报), 1998, (01) : 1 - 7
  • [10] The First Optimal Algorithm for Smooth and Strongly-Convex-Strongly-Concave Minimax Optimization
    Kovalev, Dmitry
    Gasnikov, Alexander
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,