Particle Dual Averaging: Optimization of Mean Field Neural Network with Global Convergence Rate Analysis

Cited by: 0
|
Authors
Nitanda, Atsushi [1 ,2 ]
Wu, Denny [3 ,4 ]
Suzuki, Taiji [2 ,5 ]
Affiliations
[1] Kyushu Inst Technol, Kitakyushu, Fukuoka, Japan
[2] RIKEN Ctr Adv Intelligence Project, Tokyo, Japan
[3] Univ Toronto, Toronto, ON, Canada
[4] Vector Inst Artificial Intelligence, Toronto, ON, Canada
[5] Univ Tokyo, Tokyo, Japan
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
LOGARITHMIC SOBOLEV INEQUALITIES;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose the particle dual averaging (PDA) method, which generalizes the dual averaging method in convex optimization to optimization over probability distributions with a quantitative runtime guarantee. The algorithm consists of an inner loop and an outer loop: the inner loop uses the Langevin algorithm to approximately solve for a stationary distribution, which is then optimized in the outer loop. The method can thus be interpreted as an extension of the Langevin algorithm that naturally handles nonlinear functionals on the probability space. An important application of the proposed method is the optimization of neural networks in the mean field regime, which is theoretically attractive due to the presence of nonlinear feature learning, but for which a quantitative convergence rate can be challenging to obtain. By adapting finite-dimensional convex optimization theory to the space of measures, we analyze PDA in regularized empirical/expected risk minimization and establish quantitative global convergence in learning two-layer mean field neural networks under more general settings. Our theoretical results are supported by numerical simulations on neural networks of reasonable size.
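The two-loop structure described in the abstract can be illustrated with a minimal, hypothetical sketch. This is not the authors' implementation: the toy 1D regression task, the tanh units, the linearly increasing dual-averaging weights, and all hyperparameter values below are assumptions chosen only to make the inner Langevin loop and outer dual-averaging loop concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D regression; the mean-field network is f(x) = E_{w~q}[tanh(w x)],
# with the distribution q represented by a cloud of particles W.
X = rng.normal(size=(64, 1))
y = np.tanh(1.5 * X[:, 0])

def predict(W, X):
    """Mean-field prediction: average tanh(w.x) over the particle cloud W."""
    return np.tanh(X @ W.T).mean(axis=1)

def potential_grad(W, resids, weights):
    """Gradient (in w) of the dual-averaged potential: a weighted sum of the
    losses linearized at past outer iterations (stored via their residuals)."""
    g = np.zeros_like(W)
    sech2 = 1.0 - np.tanh(X @ W.T) ** 2          # (n_data, n_particles)
    for r, a in zip(resids, weights):
        g += a * ((r[:, None] * sech2).T @ X) / len(X)
    return g

m, d = 100, 1                   # number of particles, parameter dimension
W = rng.normal(size=(m, d))     # particle cloud representing q
lam = 0.05                      # entropic / weight-decay regularization (assumed)
eta = 1e-2                      # inner Langevin step size (assumed)
resids, raw_w = [], []

for t in range(1, 31):                       # outer loop: dual averaging
    resids.append(predict(W, X) - y)         # linearization point at step t
    raw_w.append(t)                          # weights proportional to t (assumed)
    weights = [s / sum(raw_w) for s in raw_w]
    for _ in range(30):                      # inner loop: Langevin sampling of
        # the Gibbs distribution exp(-g(w)/lam - |w|^2/2) induced by the
        # averaged potential g; the +W term is the gradient of |w|^2/2.
        drift = potential_grad(W, resids, weights) / lam + W
        W = W - eta * drift + np.sqrt(2 * eta) * rng.normal(size=W.shape)

mse_final = np.mean((predict(W, X) - y) ** 2)
```

The inner loop never optimizes particles directly on the current loss; it samples from the stationary (Gibbs) distribution of the dual-averaged potential, which the outer loop updates, matching the abstract's description of PDA as a Langevin extension.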
Pages: 14
Related Papers
50 records
  • [1] Particle dual averaging: optimization of mean field neural network with global convergence rate analysis*
    Nitanda, Atsushi
    Wu, Denny
    Suzuki, Taiji
    JOURNAL OF STATISTICAL MECHANICS-THEORY AND EXPERIMENT, 2022, 2022 (11):
  • [2] Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
    Duchi, John C.
    Agarwal, Alekh
    Wainwright, Martin J.
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2012, 57 (03) : 592 - 606
  • [3] Rate analysis of dual averaging for nonconvex distributed optimization
    Liu, Changxin
    Wu, Xuyang
    Yi, Xinlei
    Shi, Yang
    Johansson, Karl H.
    IFAC PAPERSONLINE, 2023, 56 (02): 5209 - 5214
  • [4] Global convergence algorithm of particle swarm optimization and its convergence analysis
    School of Information Technology, Jiangnan University, Wuxi 214122, China
    [author names unknown]
    Kongzhi yu Juece / Control and Decision, 2009, (02): 196 - 201
  • [5] Convergence Rate of Distributed Averaging Dynamics and Optimization in Networks
    Nedic, Angelia
    FOUNDATIONS AND TRENDS IN SYSTEMS AND CONTROL, 2015, 2 (01): 1 - 100
  • [6] Rate of Convergence to Mean Field for Interacting Bosons
    Kuz, Elif
    COMMUNICATIONS IN PARTIAL DIFFERENTIAL EQUATIONS, 2015, 40 (10) : 1831 - 1854
  • [7] Improved particle swarm optimization algorithm and its global convergence analysis
    Mei, Congli
    Liu, Guohai
    Xiao, Xiao
    2010 CHINESE CONTROL AND DECISION CONFERENCE, VOLS 1-5, 2010, : 1662 - 1667
  • [8] PDE-constrained models with neural network terms: Optimization and global convergence
    Sirignano, Justin
    MacArt, Jonathan
    Spiliopoulos, Konstantinos
    JOURNAL OF COMPUTATIONAL PHYSICS, 2023, 481
  • [9] On the Global Convergence of Particle Swarm Optimization Methods
    Huang, Hui
    Qiu, Jinniao
    Riedl, Konstantin
    APPLIED MATHEMATICS AND OPTIMIZATION, 2023, 88 (02):