Private (Stochastic) Non-Convex Optimization Revisited: Second-Order Stationary Points and Excess Risks

Cited: 0
Authors
Ganesh, Arun [1 ]
Liu, Daogao [2 ]
Oh, Sewoong [1 ,2 ]
Thakurta, Abhradeep [3 ]
Affiliations
[1] Google Res, Mountain View, CA 94043 USA
[2] Univ Washington, Seattle, WA USA
[3] Google DeepMind, Mountain View, CA USA
Keywords
DOI: not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We revisit the problem of non-convex optimization under differential privacy constraints. Building on the variance-reduced algorithm SpiderBoost, we propose a framework that employs two kinds of gradient oracles: one that estimates the gradient at a single point, and a cheaper one that estimates the gradient difference between two points. Our framework maintains uniformly accurate gradient estimates throughout, which improves the rates for finding second-order stationary points. We also consider the harder task of locating the global minimum of a non-convex objective via the exponential mechanism under almost no assumptions. Our preliminary results suggest that the regularized exponential mechanism matches previous empirical and population risk bounds while removing the smoothness assumptions required by earlier polynomial-time algorithms. Moreover, when running time is not a concern, the exponential mechanism achieves a strong population risk bound, and we provide a nearly matching lower bound.
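The two-oracle scheme described above can be illustrated with a minimal SpiderBoost-style loop. This is only a sketch: the toy objective, step size, restart period, and the noise scales `sigma1`/`sigma2` are assumptions for illustration, and the Gaussian noise stands in for a properly calibrated DP mechanism, which is not derived here.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    # Gradient of a toy non-convex objective f(x) = sum(x_i^2 + cos(x_i)).
    return 2 * x - np.sin(x)

def dp_spider_boost(x0, steps=100, period=10, eta=0.05,
                    sigma1=1e-3, sigma2=1e-4):
    """SpiderBoost-style loop with two noisy gradient oracles.

    Every `period` steps, oracle O1 returns a full gradient plus noise
    of scale sigma1. On the remaining steps, the cheaper oracle O2 returns
    the gradient *difference* between consecutive iterates plus smaller
    noise of scale sigma2, which is added to the running estimate v.
    """
    x_prev = x0.copy()
    x = x0.copy()
    v = None
    for t in range(steps):
        if t % period == 0:
            # Oracle O1: (noisy) gradient at a single point.
            v = grad(x) + rng.normal(0, sigma1, x.shape)
        else:
            # Oracle O2: (noisy) gradient difference between two points.
            v = v + (grad(x) - grad(x_prev)) + rng.normal(0, sigma2, x.shape)
        x_prev = x
        x = x - eta * v
    return x

x_final = dp_spider_boost(rng.normal(size=5))
```

Between restarts the recursion keeps `v` close to the true gradient at a fraction of the cost of a fresh gradient query, which is the mechanism behind the "continuous accuracy" of the estimates.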
Pages: 24
Related papers
50 records in total
  • [31] Faster First-Order Methods for Stochastic Non-Convex Optimization on Riemannian Manifolds
    Zhou, Pan
    Yuan, Xiao-Tong
    Yan, Shuicheng
    Feng, Jiashi
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (02) : 459 - 472
  • [32] A STOCHASTIC APPROACH TO THE CONVEX OPTIMIZATION OF NON-CONVEX DISCRETE ENERGY SYSTEMS
    Burger, Eric M.
    Moura, Scott J.
    PROCEEDINGS OF THE ASME 10TH ANNUAL DYNAMIC SYSTEMS AND CONTROL CONFERENCE, 2017, VOL 3, 2017,
  • [33] Adaptive Stochastic Gradient Descent Method for Convex and Non-Convex Optimization
    Chen, Ruijuan
    Tang, Xiaoquan
    Li, Xiuting
    FRACTAL AND FRACTIONAL, 2022, 6 (12)
  • [34] A Second-Order Method for Stochastic Bandit Convex Optimisation
    Lattimore, Tor
    Gyorgy, Andras
    THIRTY SIXTH ANNUAL CONFERENCE ON LEARNING THEORY, VOL 195, 2023, 195
  • [35] Finding Second-Order Stationary Points in Nonconvex-Strongly-Concave Minimax Optimization
    Luo, Luo
    Li, Yujun
    Chen, Cheng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [36] Riemannian Stochastic Recursive Momentum Method for non-Convex Optimization
    Han, Andi
    Gao, Junbin
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 2505 - 2511
  • [37] Superfast Second-Order Methods for Unconstrained Convex Optimization
    Yurii Nesterov
    Journal of Optimization Theory and Applications, 2021, 191 : 1 - 30
  • [38] Second-order asymptotic analysis for noncoercive convex optimization
    F. Lara
    Mathematical Methods of Operations Research, 2017, 86 : 469 - 483
  • [40] A family of second-order methods for convex ℓ1-regularized optimization
    Byrd, Richard H.
    Chin, Gillian M.
    Nocedal, Jorge
    Oztoprak, Figen
    MATHEMATICAL PROGRAMMING, 2016, 159 (1-2) : 435 - 467