Online Sampling from Log-Concave Distributions

Cited by: 0

Authors
Lee, Holden [1 ]
Mangoubi, Oren [2 ]
Vishnoi, Nisheeth K. [3 ]
Affiliations
[1] Duke Univ, Durham, NC 27706 USA
[2] Worcester Polytech Inst, Worcester, MA 01609 USA
[3] Yale Univ, New Haven, CT 06520 USA
Funding
Swiss National Science Foundation
Keywords
BINARY
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Given a sequence of convex functions $f_0, f_1, \ldots, f_T$, we study the problem of sampling from the Gibbs distribution $\pi_t \propto e^{-\sum_{k=0}^{t} f_k}$ for each epoch $t$ in an online manner. Interest in this problem derives from applications in machine learning, Bayesian statistics, and optimization where, rather than obtaining all the observations at once, one constantly acquires new data and must continuously update the distribution. Our main result is an algorithm that generates roughly independent samples from $\pi_t$ for every epoch $t$ and, under mild assumptions, makes $\mathrm{polylog}(T)$ gradient evaluations per epoch. All previous results imply a bound on the number of gradient or function evaluations that is at least linear in $T$. Motivated by real-world applications, we assume that the functions are smooth, their associated distributions have a bounded second moment, and their minimizers drift in a bounded manner, but we do not assume they are strongly convex. In particular, our assumptions hold for online Bayesian logistic regression when the data satisfy natural regularity properties, giving a sampling algorithm with updates that are poly-logarithmic in $T$. In simulations, our algorithm achieves accuracy comparable to that of an algorithm specialized to logistic regression. Key to our algorithm is a novel stochastic gradient Langevin dynamics Markov chain with a carefully designed variance-reduction step and constant batch size. Technically, the lack of strong convexity is a significant barrier to analysis, and our main contribution here is a martingale exit-time argument showing that our Markov chain remains in a ball of radius roughly poly-logarithmic in $T$ for long enough to reach within $\epsilon$ of $\pi_t$.
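To make the algorithmic idea in the abstract concrete, below is a minimal sketch of variance-reduced stochastic gradient Langevin dynamics (SGLD) in this online setting, instantiated with logistic-regression losses $f_k(x) = \log(1 + e^{-y_k \langle a_k, x\rangle})$. The anchor-point estimator, all function and parameter names (grad_fk, sgld_epoch, eta, batch, steps), and the O(t) recomputation of the anchor gradient sum are illustrative assumptions, not the paper's exact Markov chain; the polylog(T) per-epoch guarantee requires maintaining that quantity more cheaply than is done here.

```python
# A minimal sketch, assuming logistic-regression losses
# f_k(x) = log(1 + exp(-y_k <a_k, x>)). The anchor-point variance-reduction
# estimator below is one standard construction, not necessarily the paper's.
import numpy as np

rng = np.random.default_rng(0)

def grad_fk(x, a, y):
    # Gradient of the logistic loss log(1 + exp(-y * <a, x>)).
    return -y * a / (1.0 + np.exp(y * a.dot(x)))

def sgld_epoch(x, A, Y, anchor, anchor_grad_sum, eta=1e-3, batch=8, steps=200):
    # SGLD targeting pi_t proportional to exp(-sum_k f_k(x)). The gradient
    # estimate is the exact gradient sum at the anchor plus a constant-batch
    # correction: unbiased for sum_k grad f_k(x), low variance near the anchor.
    n = len(Y)  # functions seen so far: f_0, ..., f_t, with n = t + 1
    for _ in range(steps):
        idx = rng.integers(n, size=batch)  # constant batch size in every epoch
        corr = (n / batch) * sum(
            grad_fk(x, A[i], Y[i]) - grad_fk(anchor, A[i], Y[i]) for i in idx)
        g = anchor_grad_sum + corr
        x = x - eta * g + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)
    return x

# Online loop: one new observation per epoch; each epoch runs a short chain.
d, T = 5, 50
x_true = rng.standard_normal(d)
A = np.empty((0, d)); Y = np.empty(0)
x = np.zeros(d)
for t in range(T):
    a = rng.standard_normal(d)
    y = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-a.dot(x_true))) else -1.0
    A = np.vstack([A, a]); Y = np.append(Y, y)
    # Anchor at the previous sample; recomputing the full anchor gradient is
    # O(t) and done here only for clarity.
    anchor = x.copy()
    anchor_grad_sum = sum(grad_fk(anchor, A[i], Y[i]) for i in range(len(Y)))
    x = sgld_epoch(x, A, Y, anchor, anchor_grad_sum)
print("sample after final epoch:", x)
```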
Pages: 12