A Framework for Improving the Reliability of Black-box Variational Inference

Cited by: 0
Authors
Welandawe, Manushi [1 ]
Andersen, Michael Riis [2 ]
Vehtari, Aki [3 ]
Huggins, Jonathan H. [4 ]
Affiliations
[1] Boston Univ, Dept Math & Stat, Boston, MA 02215 USA
[2] Tech Univ Denmark, DTU Compute, Lyngby, Denmark
[3] Aalto Univ, Dept Comp Sci, Aalto, Finland
[4] Boston Univ, Fac Comp & Data Sci, Dept Math & Stat, Boston, MA USA
Funding
US National Institutes of Health
Keywords
black-box variational inference; symmetrized KL divergence; stochastic optimization; fixed-learning rate; STOCHASTIC-APPROXIMATION; WASSERSTEIN; BOUNDS;
DOI
Not available
Chinese Library Classification
TP [Automation and computer technology]
Discipline classification code
0812
Abstract
Black-box variational inference (BBVI) now sees widespread use in machine learning and statistics as a fast yet flexible alternative to Markov chain Monte Carlo methods for approximate Bayesian inference. However, stochastic optimization methods for BBVI remain unreliable and require substantial expertise and hand-tuning to apply effectively. In this paper, we propose robust and automated black-box VI (RABVI), a framework for improving the reliability of BBVI optimization. RABVI is based on rigorously justified automation techniques, includes just a small number of intuitive tuning parameters, and detects inaccurate estimates of the optimal variational approximation. RABVI adaptively decreases the learning rate by detecting convergence of the fixed-learning-rate iterates, then estimates the symmetrized Kullback-Leibler (KL) divergence between the current variational approximation and the optimal one. It also employs a novel optimization termination criterion that enables the user to balance desired accuracy against computational cost by comparing (i) the predicted relative decrease in the symmetrized KL divergence if a smaller learning rate were used and (ii) the predicted computation required to converge with the smaller learning rate. We validate the robustness and accuracy of RABVI through carefully designed simulation studies and on a diverse set of real-world model and data examples.
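The adaptive schedule the abstract describes (run at a fixed learning rate until the iterates look stationary, average the tail iterates, then shrink the rate and repeat) can be illustrated with a toy stochastic-optimization loop. This is a minimal sketch, not the paper's algorithm: the windowed stationarity check and the tolerance-based stopping rule below are simplified stand-ins for RABVI's convergence detection and its symmetrized-KL-based termination criterion, and the function `rabvi_sketch` is a hypothetical name introduced here for illustration.

```python
import numpy as np

def rabvi_sketch(grad, x0, lr0=0.1, rho=0.5, tol=1e-3,
                 window=100, max_iters=20000, seed=0):
    """Illustrative sketch only: fixed-learning-rate SGD phases with
    convergence detection, iterate averaging, and geometric learning-rate
    decay. RABVI's actual convergence and termination criteria are based
    on the symmetrized KL divergence, which is not implemented here."""
    rng = np.random.default_rng(seed)
    x, lr = np.asarray(x0, dtype=float), lr0
    while lr > tol:                       # stand-in stopping rule
        history = []
        for _ in range(max_iters):
            x = x - lr * grad(x, rng)     # noisy gradient step at fixed lr
            history.append(x.copy())
            if len(history) >= 2 * window:
                # crude stationarity check: drift between two window means
                a = np.mean(history[-2 * window:-window], axis=0)
                b = np.mean(history[-window:], axis=0)
                if np.linalg.norm(a - b) < lr:
                    break                 # iterates look converged at this lr
        x = np.mean(history[-window:], axis=0)  # tail iterate averaging
        lr *= rho                                # decrease the learning rate
    return x

# Usage: minimize E[||x - mu||^2] from noisy gradients 2(x - mu) + noise.
mu = np.array([1.0, -2.0])
noisy_grad = lambda x, rng: 2.0 * (x - mu) + 0.5 * rng.standard_normal(2)
estimate = rabvi_sketch(noisy_grad, np.zeros(2))
```

The geometric decay factor `rho` plays the role of RABVI's candidate smaller learning rate: each phase reuses the averaged iterate from the previous phase as its starting point, so later, lower-noise phases only need to refine an already-good estimate.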
Pages: 71