POLYNOMIAL ESCAPE-TIME FROM SADDLE POINTS IN DISTRIBUTED NON-CONVEX OPTIMIZATION

Cited by: 0
Authors
Vlaski, Stefan [1 ]
Sayed, Ali H. [1 ]
Affiliations
[1] Ecole Polytech Fed Lausanne, Sch Engn, Lausanne, Switzerland
Keywords
Stochastic optimization; adaptation; non-convex costs; saddle point; escape time; gradient noise; stationary points; distributed optimization; diffusion learning; DIFFUSION; NETWORKS;
DOI
10.1109/camsap45676.2019.9022458
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
The diffusion strategy for distributed learning from streaming data employs local stochastic gradient updates along with the exchange of iterates over neighborhoods. In this work, we establish that agents cluster around a network centroid in the mean-fourth sense and proceed to study the dynamics of this point. We establish expected descent in non-convex environments in the large-gradient regime and introduce a short-term model to examine the dynamics over finite-time horizons. Using this model, we establish that the diffusion strategy is able to escape from strict saddle points in O(1/mu) iterations, where mu denotes the step-size; it is also able to return approximately second-order stationary points in a polynomial number of iterations. Relative to prior works on the polynomial escape from saddle points, most of which focus on centralized perturbed or stochastic gradient descent, our approach requires less restrictive conditions on the gradient noise process.
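For illustration, the following is a minimal sketch (not taken from the paper) of the adapt-then-combine (ATC) diffusion strategy the abstract refers to: each agent first performs a local stochastic-gradient update, then combines the intermediate iterates received from its neighbors. The names `stochastic_grad`, `A`, `w_init`, `mu`, and `num_iter` are hypothetical placeholders for the local gradient oracle, the combination matrix, the initial iterates, the step-size, and the number of iterations.

```python
import numpy as np

def diffusion_atc(stochastic_grad, A, w_init, mu=1e-3, num_iter=10000):
    """Illustrative adapt-then-combine (ATC) diffusion sketch (assumed interface).

    stochastic_grad(k, w) : agent k's noisy gradient evaluated at w, shape (M,).
    A                     : (N, N) doubly-stochastic combination matrix; A[l, k] is
                            the weight agent k gives to neighbor l's intermediate iterate.
    w_init                : (N, M) array of initial iterates, one row per agent.
    mu                    : step-size; the paper studies escape from strict saddle
                            points in O(1/mu) iterations under its stated conditions.
    """
    w = np.array(w_init, dtype=float)
    N = A.shape[0]
    for _ in range(num_iter):
        # Adaptation step: local stochastic-gradient update at each agent.
        psi = np.stack([w[k] - mu * stochastic_grad(k, w[k]) for k in range(N)])
        # Combination step: exchange and average intermediate iterates over neighborhoods.
        w = np.stack([sum(A[l, k] * psi[l] for l in range(N)) for k in range(N)])
    return w
```

In this sketch the agents' iterates remain clustered around the network centroid (the combination-weighted average of the rows of `w`), whose dynamics the paper analyzes.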
Pages: 171-175
Number of pages: 5