Distributed gradient-free and projection-free algorithm for stochastic constrained optimization

Cited by: 0
Authors
Hou J. [1 ]
Zeng X. [1 ]
Chen C. [1 ]
Affiliations
[1] National Key Laboratory of Autonomous Intelligent Unmanned Systems, School of Automation, Beijing Institute of Technology, Beijing
Source
Autonomous Intelligent Systems | 2024 / Vol. 4 / Issue 01
Funding
National Natural Science Foundation of China;
Keywords
Distributed optimization; Projection-free method; Stochastic constrained optimization; Zeroth-order optimization;
DOI
10.1007/s43684-024-00062-0
Abstract
Distributed stochastic zeroth-order optimization (DSZO), in which the objective function is distributed over multiple agents and the derivatives of the cost functions are unavailable, arises frequently in large-scale machine learning and reinforcement learning. This paper introduces a distributed stochastic algorithm for DSZO that is both projection-free and gradient-free, built on the Frank-Wolfe framework and a stochastic zeroth-order oracle (SZO). Such a scheme is particularly useful in large-scale constrained optimization problems where computing gradients or projection operators is impractical or costly, or where the objective function is not differentiable everywhere. Specifically, the proposed algorithm, enhanced by recursive momentum and gradient tracking techniques, guarantees convergence with just a single batch per iteration, a significant improvement over existing algorithms that substantially lowers the computational complexity. Under mild conditions, we prove that the SZO complexity bounds of the proposed algorithm are O(n/ϵ²) and O(n(21)) for the convex and nonconvex cases, respectively. The efficacy of the algorithm is verified on black-box binary classification problems against several competing alternatives. © The Author(s) 2024.
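To make the ingredients named in the abstract concrete, the following is a minimal single-agent sketch, not the paper's actual distributed method: it combines a coordinate-wise two-point zeroth-order gradient estimate, a simple momentum average standing in for the recursive momentum technique, and a linear minimization oracle (LMO) in place of projection, Frank-Wolfe style. The ℓ1-ball constraint, all function names, and all parameter values are illustrative assumptions; the multi-agent consensus and gradient-tracking steps are omitted.

```python
import numpy as np

def zo_grad(f, x, mu=1e-5):
    """Coordinate-wise two-point finite-difference gradient estimate (2n queries to f)."""
    n = x.size
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = mu
        g[i] = (f(x + e) - f(x - e)) / (2 * mu)
    return g

def lmo_l1_ball(g, radius=1.0):
    """Linear minimization oracle over the l1 ball:
    argmin_{||s||_1 <= radius} <g, s> is a signed, scaled coordinate vertex."""
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -radius * np.sign(g[i])
    return s

def zo_frank_wolfe(f, x0, T=500, radius=1.0, mu=1e-5, beta=0.9):
    """Gradient-free, projection-free Frank-Wolfe sketch: a momentum-averaged
    zeroth-order gradient estimate drives the LMO; no projection is ever computed."""
    x, d = x0.copy(), np.zeros_like(x0)
    for t in range(T):
        g = zo_grad(f, x, mu)
        d = beta * d + (1 - beta) * g   # momentum-averaged gradient estimate
        s = lmo_l1_ball(d, radius)      # LMO replaces the projection operator
        gamma = 2.0 / (t + 2)           # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s # convex combination keeps x feasible
    return x
```

Because each iterate is a convex combination of feasible points, feasibility is maintained without any projection; this is the core appeal of projection-free methods over polytopes and norm balls where the LMO is cheap but projection is not.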
Related Papers
50 records in total
  • [21] A distributed gradient algorithm based on randomized block-coordinate and projection-free over networks
    Zhu, Junlong
    Wang, Xin
    Zhang, Mingchuan
    Liu, Muhua
    Wu, Qingtao
    COMPLEX & INTELLIGENT SYSTEMS, 2023, 9 (01) : 267 - 283
  • [22] Distributed Quantized Gradient-Free Algorithm for Multi-Agent Convex Optimization
    Ding, Jingjing
    Yuan, Deming
    Jiang, Guoping
    Zhou, Yingjiang
    2017 29TH CHINESE CONTROL AND DECISION CONFERENCE (CCDC), 2017, : 6431 - 6435
  • [24] Gradient-free algorithms for distributed online convex optimization
    Liu, Yuhang
    Zhao, Wenxiao
    Dong, Daoyi
    ASIAN JOURNAL OF CONTROL, 2023, 25 (04) : 2451 - 2468
  • [25] INCREMENTAL GRADIENT-FREE METHOD FOR NONSMOOTH DISTRIBUTED OPTIMIZATION
    Li, Jueyou
    Li, Guoquan
    Wu, Zhiyou
    Wu, Changzhi
    Wang, Xiangyu
    Lee, Jae-Myung
    Jung, Kwang-Hyo
    JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION, 2017, 13 (04) : 1841 - 1857
  • [26] Fully Projection-Free Proximal Stochastic Gradient Method With Optimal Convergence Rates
    Li, Yan
    Cao, Xiaofeng
    Chen, Honghui
    IEEE ACCESS, 2020, 8 : 165904 - 165912
  • [27] Efficient projection-free online convex optimization using stochastic gradients
    Jiahao Xie
    Chao Zhang
    Zebang Shen
    Hui Qian
    Machine Learning, 2025, 114 (4)
  • [28] Distributed Optimization With Projection-Free Dynamics: A Frank-Wolfe Perspective
    Chen, Guanpu
    Yi, Peng
    Hong, Yiguang
    Chen, Jie
    IEEE TRANSACTIONS ON CYBERNETICS, 2024, 54 (01) : 599 - 610
  • [29] A PROJECTION-FREE DECENTRALIZED ALGORITHM FOR NON-CONVEX OPTIMIZATION
    Wai, Hoi-To
    Scaglione, Anna
    Lafond, Jean
    Moulines, Eric
    2016 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP), 2016, : 475 - 479
  • [30] Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization
    Chen, Lesi
    Xu, Jing
    Luo, Luo
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202