An Optimal Algorithm for Online Non-Convex Learning

Cited by: 0
Authors
Yang L. [1 ]
Deng L. [1 ]
Hajiesmaili M.H. [2 ]
Tan C. [1 ]
Wong W.S. [1 ]
Affiliations
[1] Chinese University of Hong Kong, Hong Kong
[2] Johns Hopkins University, Baltimore, MD
Source
2018 / Association for Computing Machinery, 2 Penn Plaza, Suite 701, New York, NY 10121-0701, United States / Vol. 46
Keywords
expert problem; Lipschitz loss function; metric space; online convex optimization; online non-convex learning; online recursive weighting; regret
DOI
10.1145/3219617.3219635
Abstract
In many online learning paradigms, convexity plays a central role in the derivation and analysis of online learning algorithms. These results, however, do not extend to non-convex settings, which arise in a wide range of recent applications. The Online Non-Convex Learning (ONCL) problem generalizes the classic Online Convex Optimization (OCO) framework by relaxing the convexity assumptions on both the cost functions, which are only required to be Lipschitz continuous, and the decision set. The state-of-the-art result for ONCL shows that the classic Hedge algorithm attains a sublinear regret of O(√(T log T)). The regret lower bound for OCO, however, is Ω(√T), and to the best of our knowledge, no existing result for the ONCL problem achieves the same bound. This paper proposes the Online Recursive Weighting (ORW) algorithm with regret of O(√T), matching the tight regret lower bound for the OCO problem and closing the regret gap between the state-of-the-art results for online convex and non-convex optimization. © 2018 ACM.
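For intuition, below is a minimal sketch of the Hedge (exponential-weights) baseline mentioned in the abstract, run over a uniform grid that discretizes a Lipschitz loss on [0, 1]. It is not the paper's Online Recursive Weighting algorithm; the function name hedge_on_grid, the grid size, and the step size are illustrative assumptions.

import numpy as np

# Sketch of the Hedge / exponential-weights baseline (not the paper's ORW
# algorithm): each point of a uniform grid on [0, 1] is treated as an expert,
# and a multiplicative-weights update is applied after each round.

def hedge_on_grid(loss_fns, num_experts=64, seed=0):
    """Run Hedge with `num_experts` grid points ("experts") in [0, 1].

    loss_fns: list of per-round loss functions mapping [0, 1] -> [0, 1].
    Returns the played points and the learner's cumulative loss.
    """
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, num_experts)            # discretized decision set
    weights = np.ones(num_experts)
    eta = np.sqrt(np.log(num_experts) / len(loss_fns))   # standard Hedge step size
    plays, total_loss = [], 0.0

    for loss in loss_fns:
        probs = weights / weights.sum()
        x = rng.choice(grid, p=probs)                    # sample a point from the weights
        plays.append(x)
        total_loss += loss(x)
        losses = np.array([loss(g) for g in grid])       # full-information feedback
        weights *= np.exp(-eta * losses)                 # multiplicative-weights update

    return plays, total_loss


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # A sequence of bounded, Lipschitz, non-convex losses for illustration.
    loss_fns = [lambda x, c=rng.uniform(0, 2 * np.pi): 0.5 * (1 + np.sin(4 * x + c))
                for _ in range(1000)]
    _, cum_loss = hedge_on_grid(loss_fns)
    print("cumulative loss over 1000 rounds:", cum_loss)

With N grid points, Hedge's regret against the best grid point is O(√(T log N)), and balancing this against the roughly O(T/N) discretization error incurred for Lipschitz losses gives the O(√(T log T)) baseline bound quoted in the abstract, which ORW improves to O(√T).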
Pages: 41-43
Number of pages: 2
Related Papers
50 records in total
  • [1] An Optimal Algorithm for Online Non-Convex Learning
    Yang, Lin
    Deng, Lei
    Hajiesmaili, Mohammad H.
    Tan, Cheng
    Wong, Wing Shing
    PROCEEDINGS OF THE ACM ON MEASUREMENT AND ANALYSIS OF COMPUTING SYSTEMS, 2018, 2 (02)
  • [2] Online Non-Convex Learning: Following the Perturbed Leader is Optimal
    Suggala, Arun Sai
    Netrapalli, Praneeth
    ALGORITHMIC LEARNING THEORY, VOL 117, 2020, 117 : 845 - 861
  • [3] Non-convex online learning via algorithmic equivalence
    Ghai, Udaya
    Lu, Zhou
    Hazan, Elad
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [4] Online Learning with Non-Convex Losses and Non-Stationary Regret
    Gao, Xiang
    Li, Xiaobo
    Zhang, Shuzhong
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 84, 2018, 84
  • [5] NO-REGRET NON-CONVEX ONLINE META-LEARNING
    Zhuang, Zhenxun
    Wang, Yunlong
    Yu, Kezi
    Lu, Songtao
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 3942 - 3946
  • [6] Online Bandit Learning for a Special Class of Non-convex Losses
    Zhang, Lijun
    Yang, Tianbao
    Jin, Rong
    Zhou, Zhi-Hua
    PROCEEDINGS OF THE TWENTY-NINTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2015, : 3158 - 3164
  • [7] Online non-convex learning for river pollution source identification
    Huang, Wenjie
    Jiang, Jing
    Liu, Xiao
    IISE TRANSACTIONS, 2023, 55 (03) : 229 - 241
  • [8] Optimal, Stochastic, Non-smooth, Non-convex Optimization through Online-to-Non-convex Conversion
    Cutkosky, Ashok
    Mehta, Harsh
    Orabona, Francesco
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [9] Penalty boundary sequential convex programming algorithm for non-convex optimal control problems
    Zhang, Zhe
    Jin, Gumin
    Li, Jianxun
    ISA TRANSACTIONS, 2018, 72 : 229 - 244
  • [10] Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization
    Zhuang, Zhenxun
    Cutkosky, Ashok
    Orabona, Francesco
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97