Localization and Approximations for Distributed Non-convex Optimization

Cited: 0
Authors
Kao, Hsu [1 ]
Subramanian, Vijay [1 ]
Affiliations
[1] Univ Michigan, Ann Arbor, MI 48109 USA
Keywords
Distributed optimization; Non-convex optimization; Localization; Proximal approximation; Regularization; Convergence
DOI
10.1007/s10957-023-02328-8
Chinese Library Classification
C93 [Management]; O22 [Operations Research];
Discipline codes
070105; 12; 1201; 1202; 120202;
Abstract
Distributed optimization has many applications in communication networks, sensor networks, signal processing, machine learning, and artificial intelligence. Methods for distributed convex optimization have been widely investigated, while those for non-convex objectives are not as well understood. One of the first frameworks for non-convex distributed optimization over an arbitrary interaction graph was proposed by Di Lorenzo and Scutari (IEEE Trans Signal Inf Process Netw 2:120-136, 2016); it iteratively applies a combination of local optimization with convex approximations and local averaging. Motivated by applications such as resource allocation in multi-cellular networks, we generalize the existing results in two ways. First, when the decision variables are separable, so that each objective depends only on a subset of them, we reduce the communication and memory complexity of the algorithm: nodes keep and communicate only their local variables instead of the whole vector of variables. Second, by means of successive proximal approximations, we relax the assumption that the objectives' gradients are bounded and Lipschitz. The proposed algorithmic framework is shown to be more widely applicable and numerically stable.
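The "local convex approximation plus averaging" template summarized in the abstract can be sketched as follows. This is a minimal illustration of the general scheme (each node solves a strongly convex proximal-linear surrogate of its own non-convex cost, then averages with its neighbors), not the paper's exact algorithm; the local costs `f_i`, the ring-graph mixing matrix `W`, and the parameters `tau` and `gamma` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                      # number of nodes
a = rng.normal(size=n)     # private data held by each node

def grad_f(i, x):
    # Gradient of the non-convex local cost f_i(x) = (x - a_i)^2 + 3*cos(x).
    return 2.0 * (x - a[i]) - 3.0 * np.sin(x)

# Doubly stochastic mixing matrix for a ring interaction graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

x = rng.normal(size=n)     # each node's local copy of the decision variable
tau, gamma = 5.0, 0.5      # proximal weight and step size (illustrative)

for _ in range(200):
    # 1) Local step: each node minimizes the strongly convex surrogate
    #    grad_f(i, x_i) * y + (tau / 2) * (y - x_i)**2,
    #    whose minimizer has the closed form below.
    x_hat = x - np.array([grad_f(i, x[i]) for i in range(n)]) / tau
    # 2) Convex combination with the current iterate, then neighbor averaging.
    x = W @ (x + gamma * (x_hat - x))

print(np.ptp(x))  # spread across the local copies; small near consensus
```

With a step size `gamma / tau` below the inverse of the gradients' Lipschitz constant, the local copies contract toward a consensus neighborhood of a stationary point of the sum of the local costs.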
Pages: 463-500 (38 pages)
Related papers
50 records in total
  • [31] Convex and Non-convex Optimization Under Generalized Smoothness
    Li, Haochuan
    Qian, Jian
    Tian, Yi
    Rakhlin, Alexander
    Jadbabaie, Ali
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [32] Trading Redundancy for Communication: Speeding up Distributed SGD for Non-convex Optimization
    Haddadpour, Farzin
    Kamani, Mohammad Mahdi
    Mahdavi, Mehrdad
    Cadambe, Viveck R.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [34] Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
    Gao, Juan
    Liu, Xin-Wei
    Dai, Yu-Hong
    Huang, Yakui
    Gu, Junhua
    COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2023, 84 (02) : 531 - 572
  • [36] Weighted distributed differential privacy ERM: Convex and non-convex
    Kang, Yilin
    Liu, Yong
    Niu, Ben
    Wang, Weiping
    COMPUTERS & SECURITY, 2021, 106
  • [38] Source localization and tracking in non-convex rooms
    Oecal, Orhan
    Dokmanic, Ivan
    Vetterli, Martin
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014,
  • [39] A new accelerating method for global non-convex quadratic optimization with non-convex quadratic constraints
    Wu, Huizhuo
    Zhang, KeCun
    APPLIED MATHEMATICS AND COMPUTATION, 2008, 197 (02) : 810 - 818
  • [40] Using non-convex approximations for efficient analysis of timed automata
    Herbreteau, Frederic
    Kini, Dileep
    Srivathsan, B.
    Walukiewicz, Igor
    IARCS ANNUAL CONFERENCE ON FOUNDATIONS OF SOFTWARE TECHNOLOGY AND THEORETICAL COMPUTER SCIENCE (FSTTCS 2011), 2011, 13 : 78 - 89