Stochastic Second-Order Method for Large-Scale Nonconvex Sparse Learning Models

Cited: 0
Authors
Gao, Hongchang [1]
Huang, Heng [1]
Affiliations
[1] Univ Pittsburgh, Dept Elect & Comp Engn, Pittsburgh, PA 15260 USA
Source
PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE | 2018
Funding
US National Science Foundation;
Keywords
SIGNAL RECOVERY; SELECTION;
DOI
None available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Sparse learning models have shown promising performance in high-dimensional machine learning applications. Their main challenge is how to optimize them efficiently. Most existing methods relax the problem to a convex one, which incurs a large estimation bias. Sparse learning models with a nonconvex constraint have therefore attracted much attention due to their better statistical performance, but they are difficult to optimize because of the non-convexity. In this paper, we propose a linearly convergent stochastic second-order method to optimize this nonconvex problem on large-scale datasets. The proposed method incorporates second-order information to improve the convergence speed. Theoretical analysis shows that our method enjoys a linear convergence rate and is guaranteed to converge to the underlying true model parameter. Experimental results verify the efficiency and correctness of the proposed method.
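The abstract describes the general recipe (stochastic second-order steps applied to a nonconvex, i.e. ℓ0-constrained, sparse model) without giving the algorithm here. As a minimal illustrative sketch only, not the authors' method: one common instantiation of this idea for sparse linear regression takes a mini-batch Newton-type step (subsampled Hessian with a small ridge term for invertibility) and then hard-thresholds to the s largest coordinates. The function names and all parameters below are assumptions for illustration.

```python
import numpy as np

def hard_threshold(w, s):
    """Keep the s largest-magnitude entries of w; zero out the rest
    (projection onto the nonconvex constraint set ||w||_0 <= s)."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-s:]
    out[keep] = w[keep]
    return out

def stochastic_newton_iht(X, y, s, batch=64, iters=50, ridge=1e-3, seed=0):
    """Illustrative stochastic second-order hard-thresholding for
    min_w ||X w - y||^2 / (2n)  subject to  ||w||_0 <= s.
    Each iteration draws a mini-batch, forms the mini-batch gradient and
    a subsampled (ridge-regularized) Hessian, takes a Newton-type step,
    and projects back onto the sparsity constraint."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        Xb, yb = X[idx], y[idx]
        g = Xb.T @ (Xb @ w - yb) / batch              # mini-batch gradient
        H = Xb.T @ Xb / batch + ridge * np.eye(d)     # subsampled Hessian
        w = hard_threshold(w - np.linalg.solve(H, g), s)
    return w
```

On a noiseless sparse linear model with the batch size exceeding the dimension, the Newton-type step essentially solves the batch ridge problem, so the iterates land near the true parameter after very few passes; this is the intuition behind the fast (linear) convergence that second-order information buys over plain stochastic gradient hard-thresholding.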
Pages: 2128-2134
Page count: 7
Related papers
50 in total
  • [31] Computing transfer function dominant poles of large-scale second-order dynamical systems
    Rommes, Joost
    Martins, Nelson
    SIAM JOURNAL ON SCIENTIFIC COMPUTING, 2008, 30 (04): : 2137 - 2157
  • [32] ON CONTROLLABILITY FOR A NONCONVEX SECOND-ORDER DIFFERENTIAL INCLUSION
    Cernea, Aurelian
    REVUE ROUMAINE DE MATHEMATIQUES PURES ET APPLIQUEES, 2013, 58 (02): : 139 - 147
  • [33] An Accelerated Second-Order Method for Distributed Stochastic Optimization
    Agafonov, Artem
    Dvurechensky, Pavel
    Scutari, Gesualdo
    Gasnikov, Alexander
    Kamzolov, Dmitry
    Lukashevich, Aleksandr
    Daneshmand, Amir
    2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 2407 - 2413
  • [34] A Second-Order Method for Stochastic Bandit Convex Optimisation
    Lattimore, Tor
    Gyorgy, Andras
    THIRTY SIXTH ANNUAL CONFERENCE ON LEARNING THEORY, VOL 195, 2023, 195
  • [35] A fully stochastic second-order trust region method
    Curtis, Frank E.
    Shi, Rui
    Optimization Methods and Software, 2022, 37 (03): : 844 - 877
  • [37] A Stochastic Second-Order Proximal Method for Distributed Optimization
    Qiu, Chenyang
    Zhu, Shanying
    Ou, Zichong
    Lu, Jie
    IEEE CONTROL SYSTEMS LETTERS, 2023, 7 : 1405 - 1410
  • [38] Efficient Second-order Weak Scheme for Stochastic Volatility Models
    Jourdain, Benjamin
    Sbai, Mohamed
    SEMINAR ON STOCHASTIC ANALYSIS, RANDOM FIELDS AND APPLICATIONS VII, 2013, 67 : 395 - 410
  • [39] Large-Scale Stochastic Learning using GPUs
    Parnell, Thomas
    Dunner, Celestine
    Atasu, Kubilay
    Sifalakis, Manolis
    Pozidis, Haris
    2017 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW), 2017, : 419 - 428
  • [40] Variance Counterbalancing for Stochastic Large-scale Learning
    Lagari, Pola Lydia
    Tsoukalas, Lefteri H.
    Lagaris, Isaac E.
    INTERNATIONAL JOURNAL ON ARTIFICIAL INTELLIGENCE TOOLS, 2020, 29 (05)