A Framework for Improving the Reliability of Black-box Variational Inference

Cited by: 0
Authors
Welandawe, Manushi [1 ]
Andersen, Michael Riis [2 ]
Vehtari, Aki [3 ]
Huggins, Jonathan H. [4 ]
Affiliations
[1] Boston Univ, Dept Math & Stat, Boston, MA 02215 USA
[2] DTU Comp Tech Univ Denmark, Lyngby, Denmark
[3] Aalto Univ, Dept Comp Sci, Aalto, Finland
[4] Boston Univ, Fac Comp & Data Sci, Dept Math & Stat, Boston, MA USA
Funding
US National Institutes of Health
Keywords
black-box variational inference; symmetrized KL divergence; stochastic optimization; fixed learning rate; STOCHASTIC-APPROXIMATION; WASSERSTEIN; BOUNDS
DOI
Not available
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Black-box variational inference (BBVI) now sees widespread use in machine learning and statistics as a fast yet flexible alternative to Markov chain Monte Carlo methods for approximate Bayesian inference. However, stochastic optimization methods for BBVI remain unreliable and require substantial expertise and hand-tuning to apply effectively. In this paper, we propose robust and automated black-box VI (RABVI), a framework for improving the reliability of BBVI optimization. RABVI is based on rigorously justified automation techniques, includes just a small number of intuitive tuning parameters, and detects inaccurate estimates of the optimal variational approximation. RABVI adaptively decreases the learning rate by detecting convergence of the fixed-learning-rate iterates, then estimates the symmetrized Kullback-Leibler (KL) divergence between the current variational approximation and the optimal one. It also employs a novel optimization termination criterion that enables the user to balance desired accuracy against computational cost by comparing (i) the predicted relative decrease in the symmetrized KL divergence if a smaller learning rate were used and (ii) the predicted computation required to converge with the smaller learning rate. We validate the robustness and accuracy of RABVI through carefully designed simulation studies and on a diverse set of real-world model and data examples.
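To make the adaptive schedule described in the abstract concrete, the toy Python sketch below fits a 1-D Gaussian approximation with a fixed learning rate, halves that rate whenever the iterates appear to have stabilized, and stops once a further halving is predicted to buy only a negligible symmetrized KL improvement. This is only an illustrative sketch, not the authors' RABVI implementation: the target, the stabilization check, the thresholds, and the names `skl_gauss` and `toy_rabvi_like` are assumptions introduced here, and the consecutive-iterate SKL is used as a crude proxy for the paper's accuracy-versus-cost termination criterion.

```python
# Illustrative sketch only: a toy adaptive learning-rate schedule in the spirit of
# the abstract (fixed learning rate -> detect stabilization -> estimate symmetrized
# KL -> decide whether a smaller rate is worth the extra cost). All names,
# thresholds, and the stabilization heuristic are assumptions, not RABVI itself.
import numpy as np

def skl_gauss(m1, s1, m2, s2):
    """Symmetrized KL between 1-D Gaussians: KL(p||q) + KL(q||p)."""
    kl = lambda ma, sa, mb, sb: np.log(sb / sa) + (sa**2 + (ma - mb)**2) / (2 * sb**2) - 0.5
    return kl(m1, s1, m2, s2) + kl(m2, s2, m1, s1)

def toy_rabvi_like(mu_star=2.0, sigma_star=0.7, lr=0.5, skl_tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    mu, log_sigma = 0.0, 0.0                      # q = N(mu, sigma^2)
    prev_mu, prev_sigma = mu, np.exp(log_sigma)
    window, max_iters, trace = 50, 10_000, []
    for t in range(max_iters):
        # Noisy gradients of KL(q || N(mu_star, sigma_star^2)) w.r.t. (mu, log_sigma).
        sigma = np.exp(log_sigma)
        g_mu = (mu - mu_star) / sigma_star**2 + 0.05 * rng.standard_normal()
        g_ls = sigma**2 / sigma_star**2 - 1.0 + 0.05 * rng.standard_normal()
        mu -= lr * g_mu
        log_sigma -= lr * g_ls
        trace.append((mu, np.exp(log_sigma)))
        # Crude stationarity check: compare averages over two halves of a window.
        if len(trace) >= window and t % window == 0:
            first = np.mean(trace[-window:-window // 2], axis=0)
            second = np.mean(trace[-window // 2:], axis=0)
            if np.allclose(first, second, atol=0.1 * lr):
                cur = second
                # Stop if the last rate decrease bought only a tiny SKL improvement.
                if skl_gauss(prev_mu, prev_sigma, cur[0], cur[1]) < skl_tol:
                    return cur
                prev_mu, prev_sigma = cur
                lr *= 0.5                          # otherwise shrink the rate and continue
                trace = []
    return np.mean(trace[-window // 2:], axis=0)

print(toy_rabvi_like())                            # approaches (mu, sigma) ~ (2.0, 0.7)
```

In this hypothetical setup the closed-form symmetrized KL between successive averaged iterates stands in for the paper's estimate of the divergence to the optimal approximation; the real framework bases both the convergence detection and the termination rule on rigorously justified criteria rather than the simple window comparison used here.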
Pages: 71
Related Papers
50 records total (first 10 shown)
  • [1] On the Convergence of Black-Box Variational Inference
    Kim, Kyurae
    Oh, Jisu
    Wu, Kaiwen
    Ma, Yi-An
    Gardner, Jacob R.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [2] Black-box Coreset Variational Inference
    Manousakas, Dionysis
    Ritter, Hippolyt
    Karaletsos, Theofanis
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [3] Provable Smoothness Guarantees for Black-Box Variational Inference
    Domke, Justin
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [4] Provable Smoothness Guarantees for Black-Box Variational Inference
    Domke, Justin
    25TH AMERICAS CONFERENCE ON INFORMATION SYSTEMS (AMCIS 2019), 2019,
  • [5] Black-Box Variational Inference as Distilled Langevin Dynamics
    Hoffman, Matthew
    Ma, Yi-An
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [6] Sample Average Approximation for Black-Box Variational Inference
    Burroni, Javier
    Domke, Justin
    Sheldon, Daniel
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2024, 244 : 471 - 498
  • [7] Black-Box Variational Inference for Stochastic Differential Equations
    Ryder, Thomas
    Golightly, Andrew
    McGough, A. Stephen
    Prangle, Dennis
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [8] Provable convergence guarantees for black-box variational inference
    Domke, Justin
    Garrigos, Guillaume
    Gower, Robert
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [9] Marginalized Stochastic Natural Gradients for Black-Box Variational Inference
    Ji, Geng
    Sujono, Debora
    Sudderth, Erik B.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [10] Provable Gradient Variance Guarantees for Black-Box Variational Inference
    Domke, Justin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32