Particle Dual Averaging: Optimization of Mean Field Neural Network with Global Convergence Rate Analysis

Cited by: 0
Authors
Nitanda, Atsushi [1 ,2 ]
Wu, Denny [3 ,4 ]
Suzuki, Taiji [2 ,5 ]
Affiliations
[1] Kyushu Inst Technol, Kitakyushu, Fukuoka, Japan
[2] RIKEN Ctr Adv Intelligence Project, Tokyo, Japan
[3] Univ Toronto, Toronto, ON, Canada
[4] Vector Inst Artificial Intelligence, Toronto, ON, Canada
[5] Univ Tokyo, Tokyo, Japan
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
LOGARITHMIC SOBOLEV INEQUALITIES;
DOI
None available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose the particle dual averaging (PDA) method, which generalizes the dual averaging method from convex optimization to optimization over probability distributions with a quantitative runtime guarantee. The algorithm consists of an inner loop and an outer loop: the inner loop utilizes the Langevin algorithm to approximately solve for a stationary distribution, which is then optimized in the outer loop. The method can thus be interpreted as an extension of the Langevin algorithm that naturally handles nonlinear functionals on the probability space. An important application of the proposed method is the optimization of neural networks in the mean-field regime, which is theoretically attractive due to the presence of nonlinear feature learning, but for which quantitative convergence rates can be challenging to obtain. By adapting finite-dimensional convex optimization theory to the space of measures, we analyze PDA for regularized empirical/expected risk minimization and establish quantitative global convergence in learning two-layer mean-field neural networks under more general settings. Our theoretical results are supported by numerical simulations on neural networks of reasonable size.
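The two-loop structure described in the abstract can be sketched in a toy setting. Everything below is an illustrative assumption rather than the paper's exact algorithm: the data, the neuron parameterization h(θ, x) = a·tanh(w·x), the step sizes, and the particular dual-averaging weights are all made up for the sketch. The outer loop maintains a weighted running average of the loss derivatives (the dual average), and the inner loop runs noisy gradient (Langevin) steps on the resulting linearized potential so that the particles approximate its Gibbs stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical; not from the paper)
n, d = 64, 2
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])

M, T, K = 200, 15, 40              # particles, outer iterations, inner Langevin steps
lam, reg, eta = 0.05, 0.01, 0.05   # entropy temperature, L2 weight, Langevin step size

theta = rng.normal(size=(M, d + 1))  # each particle: [a, w_1, ..., w_d]
g_sum = np.zeros(n)                  # weighted running sum of loss derivatives

def predict(theta, X):
    """Mean-field network output: average of a * tanh(w . x) over particles."""
    a, w = theta[:, :1], theta[:, 1:]
    return (a * np.tanh(w @ X.T)).mean(axis=0)  # shape (n,)

for t in range(1, T + 1):
    # Outer loop: dual averaging of the squared-loss derivatives, weight t
    g_sum += t * (predict(theta, X) - y)
    g_bar = 2.0 * g_sum / (t * (t + 1))

    # Inner loop: Langevin dynamics targeting exp(-Phi/lam), where
    # Phi(theta) = (1/n) sum_i g_bar[i] * a * tanh(w . x_i) + reg * |theta|^2
    for _ in range(K):
        a, w = theta[:, :1], theta[:, 1:]
        act = np.tanh(w @ X.T)                               # (M, n)
        grad_a = (act * g_bar).mean(axis=1, keepdims=True)   # d Phi / d a
        grad_w = ((a * (1 - act**2) * g_bar) @ X) / n        # d Phi / d w
        grad = np.hstack([grad_a, grad_w]) + 2 * reg * theta
        theta += -eta * grad + np.sqrt(2 * eta * lam) * rng.normal(size=theta.shape)
```

The entropic regularization shows up only through the Gaussian noise scale `sqrt(2 * eta * lam)`: as `lam` shrinks, the inner loop degenerates to plain gradient descent on the linearized potential.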
Pages: 14