Quasi-Newton updating for large-scale distributed learning

Times Cited: 1
Authors
Wu, Shuyuan [1 ]
Huang, Danyang [2 ]
Wang, Hansheng [3 ]
Affiliations
[1] Shanghai Univ Finance & Econ, Sch Stat & Management, Shanghai, Peoples R China
[2] Renmin Univ China, Ctr Appl Stat, Sch Stat, 59 Zhongguancun St, Beijing 100872, Peoples R China
[3] Peking Univ, Guanghua Sch Management, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
communication efficiency; computation efficiency; distributed system; quasi-Newton methods; statistical efficiency; CONVERGENCE;
DOI
10.1093/jrsssb/qkad059
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline Codes
020208; 070103; 0714;
Abstract
Distributed computing is critically important for modern statistical analysis. Herein, we develop a distributed quasi-Newton (DQN) framework with excellent statistical, computational, and communication efficiency. In the DQN method, neither Hessian matrix inversion nor Hessian communication is needed, which considerably reduces the computational and communication complexity of the proposed method. Notably, related existing methods analyse only numerical convergence and require a diverging number of iterations to converge. In contrast, we investigate the statistical properties of the DQN method and theoretically demonstrate that the resulting estimator is statistically efficient after a small number of iterations under mild conditions. Extensive numerical analyses demonstrate the finite-sample performance.
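To make the abstract's communication claim concrete, the following is a minimal, illustrative Python sketch of the general idea, not the paper's exact DQN updating formula: each worker holds a data shard and returns only its local gradient, while the driver maintains a BFGS-style inverse-Hessian approximation, so no Hessian matrix is ever inverted or sent over the network. The function names, the logistic-regression loss, and the simulated shards are assumptions introduced purely for illustration.

import numpy as np

def local_gradient(theta, X, y):
    # Gradient of the average negative log-likelihood on one worker's data shard.
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    return X.T @ (p - y) / len(y)

def distributed_quasi_newton(shards, dim, n_iter=15):
    # Hypothetical driver routine: only gradients cross the network; the
    # inverse-Hessian approximation H is built and kept on the driver, so no
    # Hessian matrix is inverted or communicated.
    theta = np.zeros(dim)
    H = np.eye(dim)
    grad = np.mean([local_gradient(theta, X, y) for X, y in shards], axis=0)
    for _ in range(n_iter):
        theta_new = theta - H @ grad                 # quasi-Newton step (no inversion)
        grad_new = np.mean(                          # one gradient-aggregation round
            [local_gradient(theta_new, X, y) for X, y in shards], axis=0)
        s, v = theta_new - theta, grad_new - grad
        if s @ v > 1e-10:                            # curvature condition
            rho = 1.0 / (s @ v)
            V = np.eye(dim) - rho * np.outer(s, v)
            H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS-style secant update of H
        theta, grad = theta_new, grad_new
    return theta

# Toy usage: four simulated shards of logistic-regression data.
rng = np.random.default_rng(0)
true_theta = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(4):
    X = rng.normal(size=(500, 3))
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_theta)))
    shards.append((X, y))
print(distributed_quasi_newton(shards, dim=3))

The property the sketch mirrors is that each iteration costs a single round of gradient aggregation, while all curvature information stays on the driver; the paper's actual updating scheme and theory should be taken from the article itself.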
Pages: 1326-1354
Number of pages: 29
Related Papers
50 records in total
  • [12] A Class of Diagonal Quasi-Newton Methods for Large-Scale Convex Minimization
    Leong, Wah June
    Bulletin of the Malaysian Mathematical Sciences Society, 2016, 39 : 1659 - 1672
  • [13] A Stochastic Quasi-Newton Method for Large-Scale Nonconvex Optimization With Applications
    Chen, Huiming
    Wu, Ho-Chun
    Chan, Shing-Chow
    Lam, Wong-Hing
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (11) : 4776 - 4790
  • [14] A Sequential Subspace Quasi-Newton Method for Large-Scale Convex Optimization
    Senov, Aleksandr
    Granichin, Oleg
    Granichina, Olga
    2020 AMERICAN CONTROL CONFERENCE (ACC), 2020, : 3627 - 3632
  • [15] A New Type of Quasi-Newton Updating Formulas Based on the New Quasi-Newton Equation
    Hassan, Basim A.
    NUMERICAL ALGEBRA CONTROL AND OPTIMIZATION, 2020, 10 (02): : 227 - 235
  • [16] Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods
    Sohl-Dickstein, Jascha
    Poole, Ben
    Ganguli, Surya
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 32 (CYCLE 2), 2014, 32 : 604 - 612
  • [17] Accelerating Limited-Memory Quasi-Newton Convergence for Large-Scale Optimization
    Dener, Alp
    Munson, Todd
    COMPUTATIONAL SCIENCE - ICCS 2019, PT III, 2019, 11538 : 495 - 507
  • [18] Regularization of limited memory quasi-Newton methods for large-scale nonconvex minimization
    Kanzow, Christian
    Steck, Daniel
    MATHEMATICAL PROGRAMMING COMPUTATION, 2023, 15 (03) : 417 - 444
  • [20] Updating Quasi-Newton Matrices with Limited Storage
    Nocedal, J.
    MATHEMATICS OF COMPUTATION, 1980, 35 (151) : 773 - 782