Mixtures of Variational Autoencoders

Cited by: 4
Authors
Ye, Fei [1]
Bors, Adrian G. [1]
Affiliations
[1] Univ York, Dept Comp Sci, York YO10 5GH, N Yorkshire, England
Keywords
Mixture models; Variational autoencoder; Hilbert-Schmidt Independence Criterion; Approximation
DOI
10.1109/ipta50016.2020.9286619
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we develop a new deep mixture learning framework aimed at learning the underlying structure of complex data. Each component in the mixture model is implemented as a Variational Autoencoder (VAE), a well-known deep learning model that represents data in a latent space defined on a variational manifold. The mixing parameters are estimated from a Dirichlet distribution modelled by each encoder. To train this mixture model, named M-VAE, we derive a mixture evidence lower bound (ELBO) on the sample log-likelihood, which is optimized to jointly estimate all mixture components. We further propose the d-variable Hilbert-Schmidt Independence Criterion (dHSIC) as a regularization term that enforces independence among the encoders' distributions, encouraging the mixture components to learn different data distributions and represent them in the latent space. In our experiments, the proposed M-VAE model discovers disentangled data representations that cannot be achieved with a single VAE.
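The abstract does not state the mixture bound explicitly. As a minimal sketch, one plausible form follows from applying Jensen's inequality to the mixture log-likelihood; the notation here (mixing weights $\pi_i$, a shared prior $p(\mathbf{z})$, Dirichlet parameter $\alpha$, trade-off weight $\lambda$) is an illustrative assumption rather than the paper's own:

\[
\log p(\mathbf{x}) \;=\; \log \sum_{i=1}^{K} \pi_i \, p_{\theta_i}(\mathbf{x})
\;\ge\; \sum_{i=1}^{K} \pi_i \underbrace{\Big[ \mathbb{E}_{q_{\phi_i}(\mathbf{z}\mid\mathbf{x})}\big[\log p_{\theta_i}(\mathbf{x}\mid\mathbf{z})\big] \;-\; D_{\mathrm{KL}}\big(q_{\phi_i}(\mathbf{z}\mid\mathbf{x}) \,\|\, p(\mathbf{z})\big) \Big]}_{\mathrm{ELBO}_i(\mathbf{x})},
\]

where $(\pi_1,\dots,\pi_K) \sim \mathrm{Dirichlet}(\alpha)$ is inferred by the encoders. A regularized training objective of the kind the abstract describes would then be

\[
\mathcal{L}_{\text{M-VAE}}(\mathbf{x}) \;=\; -\sum_{i=1}^{K} \pi_i \, \mathrm{ELBO}_i(\mathbf{x}) \;+\; \lambda \, \widehat{\mathrm{dHSIC}}\big(q_{\phi_1}(\mathbf{z}\mid\mathbf{x}),\dots,q_{\phi_K}(\mathbf{z}\mid\mathbf{x})\big).
\]

For characteristic kernels, dHSIC vanishes exactly when the distributions are jointly independent, so penalizing the empirical estimate $\widehat{\mathrm{dHSIC}}$ would push the encoders towards modelling mutually independent latent distributions.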
Pages: 6
Related Papers
50 records in total (first 10 listed)
  • [1] Hierarchical Decompositional Mixtures of Variational Autoencoders
    Tan, Ping Liang
    Peharz, Robert
    International Conference on Machine Learning (ICML), Vol. 97, 2019
  • [2] Mixture variational autoencoders
    Jiang, Shuoran
    Chen, Yarui
    Yang, Jucheng
    Zhang, Chuanlei
    Zhao, Tingting
    Pattern Recognition Letters, 2019, 128: 263-269
  • [3] An Introduction to Variational Autoencoders
    Kingma, Diederik P.
    Welling, Max
    Foundations and Trends in Machine Learning, 2019, 12(4): 4-89
  • [4] Subitizing with Variational Autoencoders
    Wever, Rijnder
    Runia, Tom F. H.
    Computer Vision - ECCV 2018 Workshops, Part III, 2019, Vol. 11131: 617-627
  • [5] Variational Laplace Autoencoders
    Park, Yookoon
    Kim, Chris Dongjoo
    Kim, Gunhee
    International Conference on Machine Learning (ICML), Vol. 97, 2019
  • [6] Diffusion Variational Autoencoders
    Pérez Rey, Luis A.
    Menkovski, Vlado
    Portegies, Jim
    Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI), 2020: 2704-2710
  • [7] Overdispersed Variational Autoencoders
    Shah, Harshil
    Barber, David
    Botev, Aleksandar
    2017 International Joint Conference on Neural Networks (IJCNN), 2017: 1109-1116
  • [8] Ladder Variational Autoencoders
    Sønderby, Casper Kaae
    Raiko, Tapani
    Maaløe, Lars
    Sønderby, Søren Kaae
    Winther, Ole
    Advances in Neural Information Processing Systems 29 (NIPS 2016), 2016
  • [9] Tree Variational Autoencoders
    Manduchi, Laura
    Vandenhirtz, Moritz
    Ryser, Alain
    Vogt, Julia E.
    Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023
  • [10] Clockwork Variational Autoencoders
    Saxena, Vaibhav
    Ba, Jimmy
    Hafner, Danijar
    Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021