BayesFlow: Learning Complex Stochastic Models With Invertible Neural Networks

Cited by: 56
Authors
Radev, Stefan T. [1]
Mertens, Ulf K. [1]
Voss, Andreas [1]
Ardizzone, Lynton [2]
Koethe, Ullrich [2]
Affiliations
[1] Heidelberg Univ, Inst Psychol, Dept Quantitat Res Methods, D-69117 Heidelberg, Germany
[2] Heidelberg Univ, Visual Learning Lab, Interdisciplinary Ctr Sci Comp IWR, D-69120 Heidelberg, Germany
Keywords
Data models; Bayes methods; Biological system modeling; Neural networks; Training; Numerical models; Estimation; Bayesian inference; computational and artificial intelligence; machine learning; neural networks; statistical learning; LIKELIHOOD;
DOI
10.1109/TNNLS.2020.3042395
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Estimating the parameters of mathematical models is a common problem in almost all branches of science. However, this problem can prove notably difficult when processes and model descriptions become increasingly complex and an explicit likelihood function is not available. With this work, we propose a novel method for globally amortized Bayesian inference based on invertible neural networks that we call BayesFlow. The method uses simulations to learn a global estimator for the probabilistic mapping from observed data to underlying model parameters. A neural network pretrained in this way can then, without additional training or optimization, infer full posteriors on arbitrarily many real data sets involving the same model family. In addition, our method incorporates a summary network trained to embed the observed data into maximally informative summary statistics. Learning summary statistics from data makes the method applicable to modeling scenarios where standard inference techniques with handcrafted summary statistics fail. We demonstrate the utility of BayesFlow on challenging intractable models from population dynamics, epidemiology, cognitive science, and ecology. We argue that BayesFlow provides a general framework for building amortized Bayesian parameter estimation machines for any forward model from which data can be simulated.
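As an illustration of the workflow the abstract describes, the following is a minimal sketch, assuming PyTorch and a toy Gaussian model whose likelihood is actually tractable (used here only because its simulator is trivial). It simulates (theta, x) pairs from the prior and forward model, compresses each simulated data set with a permutation-invariant summary network, and trains a conditional invertible network by maximum likelihood with a standard-normal latent; after training, posterior samples for a new data set come from a single inverse pass with no per-data-set optimization. All names (simulate, SummaryNet, Coupling, BayesFlowSketch) and architecture choices are illustrative assumptions, not the authors' reference implementation or the BayesFlow library API.

import torch
import torch.nn as nn

D_THETA, D_SUM = 2, 8  # parameter dimension (mu, log sigma) and summary dimension

def simulate(batch, n_obs=50):
    # Prior draws and forward simulations: x_i ~ N(mu, sigma^2), i.i.d.
    mu = torch.randn(batch, 1)
    log_sigma = 0.5 * torch.randn(batch, 1)
    theta = torch.cat([mu, log_sigma], dim=-1)
    x = mu.unsqueeze(1) + log_sigma.exp().unsqueeze(1) * torch.randn(batch, n_obs, 1)
    return theta, x

class SummaryNet(nn.Module):
    # Permutation-invariant embedding of an i.i.d. data set via mean pooling.
    def __init__(self):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, D_SUM))
    def forward(self, x):                # x: (batch, n_obs, 1)
        return self.phi(x).mean(dim=1)   # (batch, D_SUM)

class Coupling(nn.Module):
    # Affine coupling block conditioned on the learned summary (assumes D_THETA == 2).
    def __init__(self, flip):
        super().__init__()
        self.flip = flip
        self.net = nn.Sequential(nn.Linear(1 + D_SUM, 64), nn.ReLU(), nn.Linear(64, 2))
    def forward(self, theta, cond):
        a, b = theta.chunk(2, dim=-1)
        if self.flip:
            a, b = b, a
        s, t = self.net(torch.cat([a, cond], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)                # keep scales bounded for stability
        zb = b * s.exp() + t
        z = torch.cat([zb, a] if self.flip else [a, zb], dim=-1)
        return z, s.sum(dim=-1)
    def inverse(self, z, cond):
        a, b = z.chunk(2, dim=-1)
        if self.flip:
            a, b = b, a
        s, t = self.net(torch.cat([a, cond], dim=-1)).chunk(2, dim=-1)
        s = torch.tanh(s)
        tb = (b - t) * (-s).exp()
        return torch.cat([tb, a] if self.flip else [a, tb], dim=-1)

class BayesFlowSketch(nn.Module):
    # Summary network plus a stack of conditional coupling blocks.
    def __init__(self, n_blocks=4):
        super().__init__()
        self.summary = SummaryNet()
        self.blocks = nn.ModuleList([Coupling(flip=i % 2 == 1) for i in range(n_blocks)])
    def forward(self, theta, x):
        cond, z, log_det = self.summary(x), theta, 0.0
        for blk in self.blocks:
            z, ld = blk(z, cond)
            log_det = log_det + ld
        return z, log_det
    @torch.no_grad()
    def sample(self, x_obs, n_samples=1000):
        # Amortized posterior sampling: push N(0, I) noise through the inverse flow.
        cond = self.summary(x_obs).expand(n_samples, -1)
        theta = torch.randn(n_samples, D_THETA)
        for blk in list(self.blocks)[::-1]:
            theta = blk.inverse(theta, cond)
        return theta

model = BayesFlowSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    theta, x = simulate(128)
    z, log_det = model(theta, x)
    # Negative log posterior density under the flow (up to an additive constant).
    loss = (0.5 * (z ** 2).sum(dim=-1) - log_det).mean()
    opt.zero_grad(); loss.backward(); opt.step()

theta_true, x_obs = simulate(1)          # one "observed" data set
post = model.sample(x_obs)               # no re-training or per-data-set optimization
print("true:", theta_true.squeeze().tolist(), "posterior mean:", post.mean(0).tolist())

The same trained network can then be reused across arbitrarily many data sets of this toy model family, which is the amortization property the abstract emphasizes; the experiments reported in the paper rely on the authors' implementation, not this sketch.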
Pages: 1452-1466
Number of pages: 15
Related Papers
50 items in total
  • [1] BAYESFLOW: AMORTIZED BAYESIAN WORKFLOWS WITH NEURAL NETWORKS
    Radev, Stefan T.
    Schmitt, Marvin
    Schumacher, Lukas
    Elsemüller, Lasse
    Pratz, Valentin
    Schälte, Yannik
    Köthe, Ullrich
    Bürkner, Paul-Christian
    [J]. arXiv, 2023,
  • [2] Stabilizing invertible neural networks using mixture models
    Hagemann, Paul
    Neumayer, Sebastian
    [J]. INVERSE PROBLEMS, 2021, 37 (08)
  • [3] Koopman operator learning using invertible neural networks
    Meng, Yuhuang
    Huang, Jianguo
    Qiu, Yue
    [J]. JOURNAL OF COMPUTATIONAL PHYSICS, 2024, 501
  • [4] OvA-INN: Continual Learning with Invertible Neural Networks
    Hocquet, Guillaume
    Bichler, Olivier
    Querlioz, Damien
    [J]. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [5] Memory Efficient Invertible Neural Networks for Class-Incremental Learning
    Hocquet, Guillaume
    Bichler, Olivier
    Querlioz, Damien
    [J]. 2021 IEEE 3RD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS), 2021,
  • [6] Stochastic Competitive Learning in Complex Networks
    Silva, Thiago Christiano
    Zhao, Liang
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2012, 23 (03) : 385 - 398
  • [7] RADYNVERSION: Learning to Invert a Solar Flare Atmosphere with Invertible Neural Networks
    Osborne, Christopher M. J.
    Armstrong, John A.
    Fletcher, Lyndsay
    [J]. ASTROPHYSICAL JOURNAL, 2019, 873 (02):
  • [8] Semi-stochastic complex neural networks
    Kanarachos, AE
    Geramanis, KT
    [J]. CONTROL APPLICATIONS & ERGONOMICS IN AGRICULTURE, 1999, : 47 - 52
  • [9] Generative invertible quantum neural networks
    Rousselot, Armand
    Spannowsky, Michael
    [J]. SCIPOST PHYSICS, 2024, 16 (06):