A Stochastic Second-Order Proximal Method for Distributed Optimization

Cited: 2
Authors
Qiu, Chenyang [1 ]
Zhu, Shanying [2 ]
Ou, Zichong [1 ]
Lu, Jie [3 ,4 ]
Affiliations
[1] ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai 201210, Peoples R China
[2] Shanghai Jiao Tong Univ, Dept Automation, Key Lab Syst Control & Informat Proc, Shanghai 200240, Peoples R China
[3] ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai 201210, Peoples R China
[4] ShanghaiTech Univ, Shanghai Engn Res Ctr Energy Efficient & Custom AI, Shanghai 201210, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Convergence; Optimization; Approximation algorithms; Lagrangian functions; Upper bound; Stochastic processes; Taylor series; Distributed optimization; second-order method; stochastic optimization; ALGORITHM;
DOI
10.1109/LCSYS.2023.3244740
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
We propose a distributed stochastic second-order proximal (St-SoPro) method that enables agents in a network to cooperatively minimize the sum of their local loss functions without any centralized coordination. St-SoPro incorporates a decentralized second-order approximation into an augmented Lagrangian function and randomly samples the local gradients and Hessian matrices at each update, making it efficient for large-scale problems. We show that for restricted strongly convex and smooth problems, the agents linearly converge in expectation to a neighborhood of the optimum, and that this neighborhood can be made arbitrarily small under proper parameter settings. Simulations on real machine-learning datasets demonstrate that St-SoPro outperforms several state-of-the-art methods in convergence speed as well as computation and communication costs.
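The record contains no pseudocode, so the following is only a hedged toy sketch of the generic ingredients the abstract names — a sampled-Hessian local model combined with neighbor averaging for consensus — applied to a synthetic least-squares problem. It is not the authors' St-SoPro update; the ring topology, mixing matrix `W`, penalty `rho`, and minibatch size are all assumptions for illustration.

```python
import numpy as np

# Toy illustration (NOT the paper's St-SoPro): 4 agents on a ring, each
# holding private least-squares data, cooperatively estimate x_true.
rng = np.random.default_rng(0)
n_agents, dim, m = 4, 3, 50
A = [rng.normal(size=(m, dim)) for _ in range(n_agents)]
x_true = rng.normal(size=dim)
y = [A[i] @ x_true + 0.01 * rng.normal(size=m) for i in range(n_agents)]

# Doubly stochastic mixing matrix for the ring (assumed weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))    # each agent's local iterate
batch, rho = 10, 1.0             # minibatch size, proximal penalty (assumed)

for t in range(200):
    x_new = np.empty_like(x)
    for i in range(n_agents):
        idx = rng.choice(m, size=batch, replace=False)  # sample local data
        Ai, yi = A[i][idx], y[i][idx]
        g = Ai.T @ (Ai @ x[i] - yi) / batch             # stochastic gradient
        H = Ai.T @ Ai / batch + rho * np.eye(dim)       # sampled Hessian + proximal term
        # Second-order proximal step around the neighborhood average.
        x_new[i] = W[i] @ x - np.linalg.solve(H, g)
    x = x_new

err = np.linalg.norm(x.mean(axis=0) - x_true)
print(err)  # the agents agree on a small neighborhood of the optimum
```

Consistent with the abstract's claim, this kind of scheme contracts only to a neighborhood of the optimum: the minibatch gradient and Hessian noise leaves a residual error whose size shrinks with larger batches or a tuned penalty.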
Pages: 1405 - 1410
Number of pages: 6
Related Papers
50 records in total
  • [1] An Accelerated Second-Order Method for Distributed Stochastic Optimization
    Agafonov, Artem
    Dvurechensky, Pavel
    Scutari, Gesualdo
    Gasnikov, Alexander
    Kamzolov, Dmitry
    Lukashevich, Aleksandr
    Daneshmand, Amir
    2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021: 2407 - 2413
  • [2] Inexact proximal stochastic second-order methods for nonconvex composite optimization
    Wang, Xiao
    Zhang, Hongchao
    OPTIMIZATION METHODS & SOFTWARE, 2020, 35 (04): 808 - 835
  • [3] A Second-Order Proximal Algorithm for Consensus Optimization
    Wu, Xuyang
    Qu, Zhihai
    Lu, Jie
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2021, 66 (04) : 1864 - 1871
  • [4] Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
    Lin, Dachao
    Han, Yuze
    Ye, Haishan
    Zhang, Zhihua
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [5] The rate of convergence of proximal method of multipliers for second-order cone optimization problems
    Chu, Li
    Wang, Bo
    Zhang, Liwei
    Zhang, Hongwei
    OPTIMIZATION LETTERS, 2021, 15 (02): 441 - 457
  • [6] The rate of convergence of proximal method of multipliers for second-order cone optimization problems
    Chu, Li
    Wang, Bo
    Zhang, Liwei
    Zhang, Hongwei
    OPTIMIZATION LETTERS, 2021, 15 (02) : 441 - 457
  • [7] Distributed Nash equilibrium learning: A second-order proximal algorithm
    Pan, Wei
    Lu, Yu
    Jia, Zehua
    Zhang, Weidong
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2021, 31 (13) : 6392 - 6409
  • [8] Distributed proximal-gradient algorithms for nonsmooth convex optimization of second-order multiagent systems
    Wang, Qing
    Chen, Jie
    Zeng, Xianlin
    Xin, Bin
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2020, 30 (17) : 7574 - 7592
  • [9] Distributed Optimization Control for the System with Second-Order Dynamic
    Wang, Yueqing
    Zhang, Hao
    Li, Zhi
    MATHEMATICS, 2024, 12 (21)
  • [10] Linearly Convergent Second-Order Distributed Optimization Algorithms
    Qu, Zhihai
    Li, Xiuxian
    Li, Li
    Hong, Yiguang
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2024, 69 (08) : 5431 - 5438