Affine Variational Autoencoders

Cited by: 1
Authors
Bidart, Rene [1 ,2 ]
Wong, Alexander [1 ,2 ]
Affiliations
[1] Waterloo Artificial Intelligence Inst, Waterloo, ON, Canada
[2] Univ Waterloo, Waterloo, ON, Canada
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
Deep learning; Variational autoencoders; Image generation; Perturbation;
DOI
10.1007/978-3-030-27202-9_42
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Variational autoencoders (VAEs) have in recent years become one of the most powerful approaches to learning useful latent representations of data in an unsupervised manner. However, a major challenge with VAEs is that they have tremendous difficulty in generalizing to data that deviate from the training set (e.g., perturbed image variants). Normally data augmentation is leveraged to overcome this limitation; however, this is not only computationally expensive but also necessitates the construction of more complex models. In this study, we introduce the notion of affine variational autoencoders (AVAEs), which extend the conventional VAE architecture through the introduction of affine layers. More specifically, within the AVAE architecture an affine layer perturbs the input image prior to the encoder, and a second affine layer performs an inverse perturbation on the output of the decoder. The parameters of the affine layers are learned to enable the AVAE to encode images at canonical perturbations, resulting in better reconstruction and a disentangled latent space without the need for data augmentation or the use of more complex models. Experimental results demonstrate the efficacy of the proposed VAE architecture for generalizing to images in the MNIST validation set under affine perturbations without the need for data augmentation, showing significantly reduced loss compared to conventional VAEs.
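To make the architecture described in the abstract concrete, below is a minimal sketch of how an AVAE might be wired up in PyTorch. It is not the authors' implementation: the fully connected backbone, the tensor dimensions, and the per-image search routine fit_theta are illustrative assumptions. The sketch shows the two ingredients named in the abstract: an affine warp applied to the input before the encoder, and the inverse warp applied to the decoder output.

# Minimal AVAE sketch in PyTorch (assumed design, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineVAE(nn.Module):
    def __init__(self, image_size=28, latent_dim=20, hidden_dim=400):
        super().__init__()
        d = image_size * image_size
        self.image_size = image_size
        # Plain fully connected VAE backbone (assumption for illustration).
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(d, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, d), nn.Sigmoid(),
        )

    @staticmethod
    def warp(x, theta):
        # Apply a per-image 2x3 affine transform via a sampling grid.
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

    @staticmethod
    def invert(theta):
        # Invert each 2x3 affine matrix [A | b] -> [A^-1 | -A^-1 b].
        A, b = theta[:, :, :2], theta[:, :, 2:]
        A_inv = torch.inverse(A)
        return torch.cat([A_inv, -A_inv @ b], dim=2)

    def forward(self, x, theta):
        # theta: (B, 2, 3) affine parameters describing the input perturbation.
        x_canon = self.warp(x, theta)            # affine layer before the encoder
        h = self.enc(x_canon)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        recon = self.dec(z).view(-1, 1, self.image_size, self.image_size)
        # Inverse affine layer after the decoder restores the original pose.
        return self.warp(recon, self.invert(theta)), mu, logvar

def fit_theta(model, x, steps=100, lr=0.05):
    # Hypothetical per-image search: optimize the affine parameters so the
    # frozen VAE reconstructs the perturbed input as well as possible.
    theta = torch.eye(2, 3).repeat(x.size(0), 1, 1).requires_grad_(True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        recon, mu, logvar = model(x, theta)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        loss = F.binary_cross_entropy(recon, x, reduction="sum") + kl
        opt.zero_grad()
        loss.backward()
        opt.step()
    return theta.detach()

In this sketch, fit_theta searches over the affine parameters for each perturbed image by minimizing the usual VAE loss with the backbone frozen; this is one plausible reading of "learning the parameters of the affine layers to encode images at canonical perturbations" without data augmentation, not a description of the paper's exact procedure.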
Pages: 461-472
Page count: 12