Affine Variational Autoencoders

Cited by: 1
Authors
Bidart, Rene [1 ,2 ]
Wong, Alexander [1 ,2 ]
Affiliations
[1] Waterloo Artificial Intelligence Inst, Waterloo, ON, Canada
[2] Univ Waterloo, Waterloo, ON, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Deep learning; Variational autoencoders; Image generation; Perturbation;
DOI
10.1007/978-3-030-27202-9_42
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Variational autoencoders (VAEs) have in recent years become one of the most powerful approaches to learning useful latent representations of data in an unsupervised manner. However, a major challenge with VAEs is that they have tremendous difficulty generalizing to data that deviate from the training set (e.g., perturbed image variants). Data augmentation is normally leveraged to overcome this limitation; however, this is not only computationally expensive but also necessitates the construction of more complex models. In this study, we introduce the notion of affine variational autoencoders (AVAEs), which extend the conventional VAE architecture through the introduction of affine layers. More specifically, within the AVAE architecture one affine layer perturbs the input image prior to the encoder, and a second affine layer applies the inverse perturbation to the output of the decoder. The parameters of the affine layers are learned to enable the AVAE to encode images at canonical perturbations, resulting in better reconstructions and a disentangled latent space without the need for data augmentation or more complex models. Experimental results demonstrate the efficacy of the proposed architecture in generalizing to affinely perturbed images from the MNIST validation set without data augmentation, achieving significantly reduced loss compared to conventional VAEs.
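The core mechanism the abstract describes, an affine layer that perturbs the input before the encoder and a second affine layer that applies the inverse perturbation after the decoder, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the VAE is stubbed out as an identity function, and the warping routine (`warp`) and helper names are illustrative assumptions only.

```python
import numpy as np

def affine_matrix(theta, tx=0.0, ty=0.0):
    """2x3 affine matrix: rotation by theta (radians) plus translation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty]])

def warp(img, A):
    """Apply affine A about the image centre (inverse mapping,
    nearest-neighbour sampling): out(x) = img(A^{-1} x)."""
    h, w = img.shape
    Ainv = np.linalg.inv(np.vstack([A, [0.0, 0.0, 1.0]]))[:2]
    out = np.zeros_like(img)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    for y in range(h):
        for x in range(w):
            sx, sy = Ainv @ np.array([x - cx, y - cy, 1.0])
            sx, sy = int(round(sx + cx)), int(round(sy + cy))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = img[sy, sx]
    return out

def identity_vae(x):
    """Stand-in for the trained VAE encoder/decoder pair (hypothetical)."""
    return x

def avae_forward(x, A):
    """AVAE pipeline sketch: affine layer -> VAE -> inverse affine layer."""
    canon = warp(x, A)            # affine layer before the encoder
    recon = identity_vae(canon)   # conventional VAE in the middle
    Ainv = np.linalg.inv(np.vstack([A, [0.0, 0.0, 1.0]]))[:2]
    return warp(recon, Ainv)      # inverse affine layer after the decoder

# Toy "digit": a bright block on a 16x16 canvas.
img = np.zeros((16, 16))
img[4:12, 6:10] = 1.0
A = affine_matrix(np.deg2rad(30))
out = avae_forward(img, A)
print("mean reconstruction residual:", np.abs(out - img).mean())
```

Up to nearest-neighbour resampling error, the inverse affine layer undoes the input perturbation, so the output approximately matches the original image; in the actual AVAE the affine parameters are learned rather than given.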
Pages: 461-472
Page count: 12