Continuous Mixtures of Tractable Probabilistic Models

Cited by: 0

Authors
Correia, Alvaro H. C. [1]
Gala, Gennaro [1]
Quaeghebeur, Erik [1]
de Campos, Cassio [1]
Peharz, Robert [1,2]
Affiliations
[1] Eindhoven Univ Technol, Eindhoven, Netherlands
[2] Graz Univ Technol, Graz, Austria
Keywords
(none listed)
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Probabilistic models based on continuous latent spaces, such as variational autoencoders, can be understood as uncountable mixture models whose components depend continuously on the latent code. They have proven to be expressive tools for generative and probabilistic modelling, but are at odds with tractable probabilistic inference, that is, computing marginals and conditionals of the represented probability distribution. Meanwhile, tractable probabilistic models such as probabilistic circuits (PCs) can be understood as hierarchical discrete mixture models, and are thus capable of performing exact inference efficiently, but they often show subpar performance compared with continuous latent-space models. In this paper, we investigate a hybrid approach, namely continuous mixtures of tractable models with a small latent dimension. While these models are analytically intractable, they are readily amenable to numerical integration schemes based on a finite set of integration points. With a sufficiently large number of integration points, the approximation becomes de facto exact. Moreover, for a finite set of integration points, the integration method effectively compiles the continuous mixture into a standard PC. In experiments, we show that this simple scheme proves remarkably effective, as PCs learnt this way set a new state of the art for tractable models on many standard density estimation benchmarks.
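The abstract's key step, replacing the intractable integral p(x) = ∫ p(x|z) p(z) dz with a finite weighted sum over integration points, turns the continuous mixture into an ordinary finite mixture, i.e. a PC with a single sum node over tractable components. The sketch below illustrates only that idea under stated assumptions: the toy linear decoder, the factorized Gaussian components, the plain Monte Carlo integration points, and all names (W, log_component, Z, log_w) are illustrative choices, not the paper's actual architecture or quadrature scheme.

import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
D_X, D_Z, N_POINTS = 8, 2, 64  # data dim, small latent dim, # integration points

# Toy linear "decoder" mapping a latent code z to the mean of a factorized
# (hence tractable) Gaussian component p(x | z); in the paper's setting this
# role is played by a learned network that parameterizes a tractable model.
W = rng.normal(size=(D_Z, D_X))

def log_component(x, z, sigma=1.0):
    # log p(x | z): fully factorized Gaussian with mean z @ W and scale sigma.
    mu = z @ W
    return -0.5 * np.sum(((x - mu) / sigma) ** 2 + np.log(2.0 * np.pi * sigma ** 2))

# Finite set of integration points for the standard-normal prior p(z).
# Plain Monte Carlo here; quadrature or quasi-Monte Carlo points would only
# change Z and the weights log_w.
Z = rng.standard_normal(size=(N_POINTS, D_Z))
log_w = np.full(N_POINTS, -np.log(N_POINTS))  # uniform weights summing to 1

def log_marginal(x):
    # log p(x) ≈ logsumexp_i [ log w_i + log p(x | z_i) ]: a finite mixture,
    # i.e. a PC with one sum node over N_POINTS tractable components.
    return logsumexp(log_w + np.array([log_component(x, z) for z in Z]))

x = rng.standard_normal(D_X)
print(f"approximate log p(x) with {N_POINTS} integration points: {log_marginal(x):.3f}")

Note that once Z and log_w are fixed, the mixture has one component per integration point, which is why the integration scheme can be read as compiling the continuous mixture into a standard PC.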
Pages: 7244-7252 (9 pages)