Online learning via congregational gradient descent

Cited by: 0
Authors
Kim L. Blackmore
Robert C. Williamson
Iven M. Y. Mareels
William A. Sethares
Affiliations
[1] DSTO C3 Research Centre, Department of Defence, Fernhill Park
[2] Department of Engineering, Australian National University
[3] Department of Electrical and Electronic Engineering, The University of Melbourne
[4] Department of Electrical and Computer Engineering, University of Wisconsin-Madison
Keywords
Online learning; Genetic algorithm; Gradient descent;
DOI: not available
Abstract
We propose and analyse a populational version of stepwise gradient descent suitable for a wide range of learning problems. The algorithm is motivated by genetic algorithms, which update a population of solutions rather than just a single representative as is typical for gradient descent. This modification of traditional gradient descent (as used, for example, in the backpropagation algorithm) avoids getting trapped in local minima. We use an averaging analysis of the algorithm to relate its behaviour to an associated ordinary differential equation. We derive a result concerning how long one has to wait in order that, with a given high probability, the algorithm is within a certain neighbourhood of the global minimum. We also analyse the effect of different population sizes. An example is presented which corroborates our theory very well.
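The populational idea in the abstract can be sketched as follows: run a population of gradient-descent iterates from random starting points and keep the best final iterate, so that at least one member is likely to start in the global minimum's basin of attraction. This is a minimal, hypothetical illustration only; the objective `f`, the population size, the deterministic update, and the final selection rule are assumptions for the sketch, not the paper's stochastic stepwise update or its averaging analysis.

```python
import random

def f(x):
    # Double-well objective: local minimum near x = 0.96,
    # global minimum near x = -1.04.
    return x**4 - 2 * x**2 + 0.3 * x

def grad_f(x):
    return 4 * x**3 - 4 * x + 0.3

def congregational_gd(pop_size=20, steps=200, lr=0.02, seed=0):
    # Maintain a population of iterates, update every member by
    # gradient descent, and return the best member found
    # (illustrative selection rule).
    rng = random.Random(seed)
    population = [rng.uniform(-2.0, 2.0) for _ in range(pop_size)]
    for _ in range(steps):
        population = [x - lr * grad_f(x) for x in population]
    return min(population, key=f)

# A single run started in the wrong basin stalls at the local minimum,
# while the population's best member reaches the global one.
x = 1.0
for _ in range(200):
    x -= 0.02 * grad_f(x)
print(round(x, 2), round(congregational_gd(), 2))  # → 0.96 -1.04
```

Because the starting points cover the search space, the probability that no member starts in the global basin shrinks rapidly with population size, which is the intuition behind the paper's analysis of different population sizes.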
Pages: 331 - 363
Page count: 32
Related Papers
50 items in total
  • [1] Online learning via congregational gradient descent
    Blackmore, KL
    Williamson, RC
    Mareels, IMY
    Sethares, WA
    [J]. MATHEMATICS OF CONTROL SIGNALS AND SYSTEMS, 1997, 10 (04) : 331 - 363
  • [2] LEARNING BY ONLINE GRADIENT DESCENT
    BIEHL, M
    SCHWARZE, H
    [J]. JOURNAL OF PHYSICS A-MATHEMATICAL AND GENERAL, 1995, 28 (03): : 643 - 656
  • [3] Online Gradient Descent Learning Algorithms
    Yiming Ying
    Massimiliano Pontil
    [J]. Foundations of Computational Mathematics, 2008, 8 : 561 - 596
  • [4] Online gradient descent learning algorithms
    Ying, Yiming
    Pontil, Massimiliano
    [J]. FOUNDATIONS OF COMPUTATIONAL MATHEMATICS, 2008, 8 (05) : 561 - 596
  • [5] Opposite Online Learning via Sequentially Integrated Stochastic Gradient Descent Estimators
    Cui, Wenhai
    Ji, Xiaoting
    Kong, Linglong
    Yan, Xiaodong
    [J]. THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023, : 7270 - 7278
  • [6] Learning General Halfspaces with Adversarial Label Noise via Online Gradient Descent
    Diakonikolas, Ilias
    Kontonis, Vasilis
    Tzamos, Christos
    Zarifis, Nikos
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [7] Dual Space Gradient Descent for Online Learning
    Trung Le
    Tu Dinh Nguyen
    Vu Nguyen
    Dinh Phung
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [8] Learning ReLUs via Gradient Descent
    Soltanolkotabi, Mahdi
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [9] Online Learning With Inexact Proximal Online Gradient Descent Algorithms
    Dixit, Rishabh
    Bedi, Amrit Singh
    Tripathi, Ruchi
    Rajawat, Ketan
    [J]. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2019, 67 (05) : 1338 - 1352
  • [10] Multileave Gradient Descent for Fast Online Learning to Rank
    Schuth, Anne
    Oosterhuis, Harrie
    Whiteson, Shimon
    de Rijke, Maarten
    [J]. PROCEEDINGS OF THE NINTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING (WSDM'16), 2016, : 457 - 466