Minimax Mixing Time of the Metropolis-Adjusted Langevin Algorithm for Log-Concave Sampling

Cited by: 0
Authors
Wu, Keru [1 ]
Schmidler, Scott [1 ]
Chen, Yuansi [1 ]
Affiliations
[1] Duke Univ, Dept Stat Sci, Durham, NC 27708 USA
Funding
European Research Council
Keywords
Langevin algorithms; MCMC algorithms; Hamiltonian dynamics; Computational complexity; Bayesian computation; LOWER BOUNDS; CONVERGENCE; HASTINGS; RATES; MCMC
DOI
(none available)
Chinese Library Classification
TP (automation and computer technology)
Discipline code
0812
Abstract
We study the mixing time of the Metropolis-adjusted Langevin algorithm (MALA) for sampling from a log-smooth and strongly log-concave distribution. We establish its optimal minimax mixing time under a warm start. Our main contribution is twofold. First, for a d-dimensional log-concave density with condition number κ, we show that MALA with a warm start mixes in Õ(κ√d) iterations up to logarithmic factors. This improves upon previous work in the dependency on either the condition number κ or the dimension d. Our proof relies on comparing the leapfrog integrator with the continuous Hamiltonian dynamics, where we establish a new concentration bound for the acceptance rate. Second, we prove a spectral-gap-based mixing time lower bound for reversible MCMC algorithms on general state spaces. We apply this lower bound to construct a hard distribution for which MALA requires at least Ω̃(κ√d) steps to mix. The lower bound for MALA matches our upper bound in terms of condition number and dimension. Finally, numerical experiments are included to validate our theoretical results.
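The abstract concerns the standard MALA transition: a Langevin (gradient-plus-noise) proposal corrected by a Metropolis-Hastings accept/reject step. A minimal sketch of one such iteration is given below; the function names `mala_step`, `log_p`, and `grad_log_p` and the step size `h` are illustrative choices, not notation from the paper.

```python
import numpy as np

def mala_step(x, log_p, grad_log_p, h, rng):
    """One MALA iteration for a target density proportional to exp(log_p)."""
    # Langevin proposal: Euler step of the gradient flow plus Gaussian noise,
    # i.e. y ~ N(x + h * grad_log_p(x), 2h * I)
    mean_fwd = x + h * grad_log_p(x)
    y = mean_fwd + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)
    # Log proposal densities q(y|x) and q(x|y), dropping the shared
    # Gaussian normalizing constant (it cancels in the ratio)
    mean_bwd = y + h * grad_log_p(y)
    log_q_fwd = -np.sum((y - mean_fwd) ** 2) / (4.0 * h)
    log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (4.0 * h)
    # Metropolis-Hastings correction makes the chain reversible w.r.t. exp(log_p)
    log_alpha = log_p(y) - log_p(x) + log_q_bwd - log_q_fwd
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False
```

Without the accept/reject step this is the unadjusted Langevin algorithm, whose stationary distribution is biased by the discretization; the correction removes that bias, and the paper's analysis bounds how many such iterations are needed to mix from a warm start.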
Pages: 1-63
Page count: 63
Related articles
50 records total (first 10 shown)
  • [1] Minimax Mixing Time of the Metropolis-Adjusted Langevin Algorithm for Log-Concave Sampling
    Wu, Keru
    Schmidler, Scott
    Chen, Yuansi
    Journal of Machine Learning Research, 2022, 23
  • [2] Nonconvex sampling with the Metropolis-adjusted Langevin algorithm
    Mangoubi, Oren
    Vishnoi, Nisheeth K.
    Conference on Learning Theory, Vol 99, 2019, 99
  • [3] Resolving the Mixing Time of the Langevin Algorithm to its Stationary Distribution for Log-Concave Sampling
    Altschuler, Jason M.
    Talwar, Kunal
    Thirty-Sixth Annual Conference on Learning Theory, Vol 195, 2023, 195
  • [4] Langevin diffusions and the Metropolis-adjusted Langevin algorithm
    Xifara, T.
    Sherlock, C.
    Livingstone, S.
    Byrne, S.
    Girolami, M.
    Statistics & Probability Letters, 2014, 91: 14-19
  • [5] autoMALA: Locally adaptive Metropolis-adjusted Langevin algorithm
    Biron-Lattes, Miguel
    Surjanovic, Nikola
    Syed, Saifuddin
    Campbell, Trevor
    Bouchard-Cote, Alexandre
    International Conference on Artificial Intelligence and Statistics, Vol 238, 2024, 238
  • [6] Particle Metropolis-adjusted Langevin algorithms
    Nemeth, Christopher
    Sherlock, Chris
    Fearnhead, Paul
    Biometrika, 2016, 103 (03): 701-717
  • [7] On the Computational Complexity of Metropolis-Adjusted Langevin Algorithms for Bayesian Posterior Sampling
    Tang, Rong
    Yang, Yun
    Journal of Machine Learning Research, 2024, 25: 1-79
  • [8] Implicit Langevin Algorithms for Sampling from Log-Concave Densities
    Hodgkinson, Liam
    Salomone, Robert
    Roosta, Fred
    Journal of Machine Learning Research, 2021, 22
  • [9] Log-concave sampling: Metropolis-Hastings algorithms are fast
    Dwivedi, Raaz
    Chen, Yuansi
    Wainwright, Martin J.
    Yu, Bin
    Journal of Machine Learning Research, 2019, 20