Recurrent Variational Autoencoders for Learning Nonlinear Generative Models in the Presence of Outliers

Cited: 14
Authors
Wang, Yu [1 ]
Dai, Bin [2 ]
Hua, Gang [3 ]
Aston, John [1 ]
Wipf, David [3 ]
Affiliations
[1] Univ Cambridge, Dept Pure Math & Stat, Cambridge CB2 1TN, England
[2] Tsinghua Univ, Beijing 100084, Peoples R China
[3] Microsoft Res, Redmond, WA 98052 USA
Funding
UK Engineering and Physical Sciences Research Council;
Keywords
Deep generative models; variational autoencoder; robust PCA; outlier removal; variational Bayesian model; deep learning; SPARSE;
DOI
10.1109/JSTSP.2018.2876995
Chinese Library Classification
TM [Electrical engineering]; TN [Electronic and communication technology];
Discipline Codes
0808; 0809;
Abstract
This paper explores two useful modifications of the recent variational autoencoder (VAE), a popular deep generative modeling framework that dresses traditional autoencoders with probabilistic attire. The first involves a specially tailored form of conditioning that allows us to simplify the VAE decoder structure while simultaneously introducing robustness to outliers. In a related vein, a second, complementary alteration is proposed to further build invariance to contaminated or dirty samples via a data augmentation process that amounts to recycling. In brief, to the extent that the VAE is legitimately a representative generative model, then each output from the decoder should closely resemble an authentic sample, which can then be resubmitted as a novel input ad infinitum. Moreover, this can be accomplished via special recurrent connections without the need for additional parameters to be trained. We evaluate these proposals on multiple practical outlier-removal and generative modeling tasks involving nonlinear low-dimensional manifolds, demonstrating considerable improvements over existing algorithms.
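The recycling idea in the abstract — resubmitting each decoder reconstruction as a fresh input through the same mapping, with no new parameters trained for the recurrence — can be isolated in a toy NumPy sketch. Everything below is an illustrative assumption, not the paper's architecture: a fixed linear subspace stands in for the trained nonlinear VAE, and one "autoencode" pass partially pulls the input toward that subspace, so repeated recycling progressively suppresses an off-manifold outlier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained VAE: a fixed 2-D subspace in 10-D space.
# (Illustrative assumption -- the paper uses a deep nonlinear VAE.)
W = rng.standard_normal((10, 2))
P = W @ np.linalg.pinv(W)          # projector onto the model's "manifold"

def autoencode(x, step=0.5):
    """One encode/decode pass: partially pull x toward the manifold."""
    return x + step * (P @ x - x)

x_clean = W @ rng.standard_normal(2)   # a sample lying on the manifold
x = x_clean.copy()
x[3] += 5.0                            # sparse outlier corruption

# "Recycling": resubmit each reconstruction as a new input, reusing the
# same mapping every pass -- no additional parameters are introduced.
errors = []
for _ in range(6):
    x = autoencode(x)
    errors.append(np.linalg.norm(x - x_clean))

# The off-manifold (outlier) component shrinks with each recycled pass.
assert all(a > b for a, b in zip(errors, errors[1:]))
print(f"error after 6 passes: {errors[-1]:.3f}")
```

In this toy setting the repeated pass is just a contraction toward a linear subspace; the paper's contribution is realizing an analogous recurrence inside a deep VAE, where the decoder output is fed back through the encoder via recurrent connections.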
Pages: 1615-1627 (13 pages)
Related Papers
50 items in total
  • [1] Green Generative Modeling: Recycling Dirty Data using Recurrent Variational Autoencoders
    Wang, Yu
    Dai, Bin
    Hua, Gang
    Aston, John
    Wipf, David
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2017), 2017
  • [2] About Generative Aspects of Variational Autoencoders
    Asperti, Andrea
    MACHINE LEARNING, OPTIMIZATION, AND DATA SCIENCE, 2019, 11943 : 71 - 82
  • [3] A Generative Model For Zero Shot Learning Using Conditional Variational Autoencoders
    Mishra, Ashish
    Reddy, Shiva Krishna
    Mittal, Anurag
    Murthy, Hema A.
    PROCEEDINGS 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2018, : 2269 - 2277
  • [4] Deep Generative Models for Image Generation: A Practical Comparison Between Variational Autoencoders and Generative Adversarial Networks
    El-Kaddoury, Mohamed
    Mahmoudi, Abdelhak
    Himmi, Mohammed Majid
    MOBILE, SECURE, AND PROGRAMMABLE NETWORKING, 2019, 11557 : 1 - 8
  • [5] Variational Mixture-of-Experts Autoencoders for Multi-Modal Deep Generative Models
    Shi, Yuge
    Siddharth, N.
    Paige, Brooks
    Torr, Philip H. S.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [6] Integrating imprecise data in generative models using interval-valued Variational Autoencoders
    Sánchez, Luciano
    Costa, Nahuel
    Couso, Inés
    Strauss, Olivier
    Information Fusion, 2025, 114
  • [7] Combination of Variational Autoencoders and Generative Adversarial Network into an Unsupervised Generative Model
    Almalki, Ali Jaber
    Wocjan, Pawel
    ADVANCES IN ARTIFICIAL INTELLIGENCE AND APPLIED COGNITIVE COMPUTING, 2021, : 101 - 110
  • [8] Robust contrastive learning and nonlinear ICA in the presence of outliers
    Sasaki, Hiroaki
    Takenouchi, Takashi
    Monti, Ricardo
    Hyvarinen, Aapo
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2020), 2020, 124 : 659 - 668
  • [9] Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
    Mescheder, Lars
    Nowozin, Sebastian
    Geiger, Andreas
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [10] Improving Generative and Discriminative Modelling Performance by Implementing Learning Constraints in Encapsulated Variational Autoencoders
    Bai, Wenjun
    Quan, Changqin
    Luo, Zhi-Wei
    APPLIED SCIENCES-BASEL, 2019, 9 (12):