Symmetric Regularized Sequential Latent Variable Models With Adversarial Neural Networks

Cited by: 0
Authors
Huang, Jin [1 ]
Xiao, Ming [1 ]
Affiliations
[1] KTH Royal Inst Technol, Div Informat Sci & Engn, S-10044 Stockholm, Sweden
Keywords
GAN; latent variable; variational recurrent model; generative model
DOI
10.1109/TETCI.2024.3398036
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recurrent neural networks (RNNs), with their richly distributed internal states and flexible non-linear transition functions, have gradually overtaken dynamic Bayesian networks in modeling highly structured sequential data. Such data, e.g., speech and handwriting, often exhibit complex relationships between underlying variational factors, such as speaker characteristics, and the observed signal. The standard RNN has very limited randomness or variability in its structure, which comes only from the output conditional probability model. To improve variability and performance, we study new latent variable models with novel regularization methods. This paper presents different ways of using high-level latent random variables in RNNs to model the variability in sequential data, and explores adversarial methods for training a variational RNN model. Through theoretical analysis we show that, in contrast to competing approaches, our schemes are theoretically optimal for model training, and that the symmetric objective function in the adversarial training provides better training stability. Our approach also improves the posterior approximation in the variational inference network through a separate adversarial training step. Numerical results on the TIMIT speech dataset show that the reconstruction loss and the evidence lower bound converge to the same level, and that the adversarial training loss converges stably. The results also show that our regularization approach provides stability and smoothness when minimizing the probability-distribution distance between the prior and posterior of the latent variables.
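To make the mechanism described in the abstract concrete, the following is a minimal sketch of one step of a variational RNN in which a discriminator matches the learned prior p(z_t | h_{t-1}) to the approximate posterior q(z_t | x_t, h_{t-1}) through a symmetric adversarial objective. It assumes PyTorch; the module names, layer sizes, and GRU-based recurrence are illustrative assumptions, one plausible reading of the abstract rather than the authors' implementation.

```python
# Minimal sketch (not the authors' code) of one step of a variational RNN
# whose prior p(z_t | h_{t-1}) and approximate posterior q(z_t | x_t, h_{t-1})
# are matched by a discriminator with a symmetric adversarial objective.
# Module names, layer sizes, and the GRU recurrence are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

x_dim, z_dim, h_dim = 64, 16, 32

prior_net = nn.Linear(h_dim, 2 * z_dim)              # h_{t-1} -> (mu, logvar) of prior
posterior_net = nn.Linear(x_dim + h_dim, 2 * z_dim)  # (x_t, h_{t-1}) -> posterior params
decoder = nn.Linear(z_dim + h_dim, x_dim)            # reconstruct x_t from (z_t, h_{t-1})
rnn_cell = nn.GRUCell(x_dim + z_dim, h_dim)          # deterministic recurrence
disc = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, 1))  # critic on z

def sample(params):
    """Reparameterized Gaussian sample from concatenated (mu, logvar)."""
    mu, logvar = params.chunk(2, dim=-1)
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def step(x_t, h_prev):
    z_prior = sample(prior_net(h_prev))
    z_post = sample(posterior_net(torch.cat([x_t, h_prev], dim=-1)))
    x_hat = decoder(torch.cat([z_post, h_prev], dim=-1))
    recon = F.mse_loss(x_hat, x_t)  # Gaussian reconstruction term, simplified

    # Discriminator loss: tell prior samples from posterior samples.
    # (In full training this is optimized in a separate, alternating step on
    # detached samples, matching the abstract's separate adversarial step.)
    d_prior, d_post = disc(z_prior.detach()), disc(z_post.detach())
    ones, zeros = torch.ones_like(d_prior), torch.zeros_like(d_prior)
    disc_loss = F.binary_cross_entropy_with_logits(d_prior, ones) \
              + F.binary_cross_entropy_with_logits(d_post, zeros)

    # Symmetric generator regularizer: penalize the mismatch in BOTH
    # directions instead of only pushing the posterior toward the prior.
    g_prior, g_post = disc(z_prior), disc(z_post)
    sym_reg = F.binary_cross_entropy_with_logits(g_post, ones) \
            + F.binary_cross_entropy_with_logits(g_prior, zeros)

    h_t = rnn_cell(torch.cat([x_t, z_post], dim=-1), h_prev)
    return recon, disc_loss, sym_reg, h_t

# Example usage on a random batch of 8 frames:
x_t = torch.randn(8, x_dim)
h_prev = torch.zeros(8, h_dim)
recon, disc_loss, sym_reg, h_t = step(x_t, h_prev)
```

Scoring samples from both distributions in both directions is one reading of the paper's symmetric objective; the abstract credits exactly this symmetry with more stable adversarial training than one-sided prior-posterior matching.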
Pages: 565-575
Page count: 11
Related Papers
50 records in total
  • [1] Regularized Sequential Latent Variable Models with Adversarial Neural Networks
    Huang, Jin
    Xiao, Ming
    20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021, : 834 - 839
  • [2] Bayesian Regularized Multivariate Generalized Latent Variable Models
    Feng, Xiang-Nan
    Wu, Hao-Tian
    Song, Xin-Yuan
    STRUCTURAL EQUATION MODELING-A MULTIDISCIPLINARY JOURNAL, 2017, 24 (03) : 341 - 358
  • [3] Adding Regularized Horseshoes to the Dynamics of Latent Variable Models
    Binding, Garret
    Koc, Piotr
    POLITICAL ANALYSIS, 2025,
  • [4] Improving sequential latent variable models with autoregressive flows
    Marino, Joseph
    Chen, Lei
    He, Jiawei
    Mandt, Stephan
    MACHINE LEARNING, 2022, 111 (04) : 1597 - 1620
  • [5] Sequential Dynamic Classification Using Latent Variable Models
    Lee, Seung Min
    Roberts, Stephen J.
    COMPUTER JOURNAL, 2010, 53 (09) : 1415 - 1429
  • [6] Improving Sequential Latent Variable Models with Autoregressive Flows
    Marino, Joseph
    Chen, Lei
    He, Jiawei
    Mandt, Stephan
    SYMPOSIUM ON ADVANCES IN APPROXIMATE BAYESIAN INFERENCE, VOL 118, 2019, 118
  • [7] Biosignal Generation and Latent Variable Analysis With Recurrent Generative Adversarial Networks
    Harada, Shota
    Hayashi, Hideaki
    Uchida, Seiichi
    IEEE ACCESS, 2019, 7 : 144292 - 144302
  • [8] Improving classification with latent variable models by sequential constraint optimization
    Westerdijk, M
    Wiegerinck, W
    NEUROCOMPUTING, 2004, 56 : 167 - 185
  • [9] Recruitment Market Trend Analysis with Sequential Latent Variable Models
    Zhu, Chen
    Zhu, Hengshu
    Xiong, Hui
    Ding, Pengliang
    Xie, Fang
    KDD'16: PROCEEDINGS OF THE 22ND ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2016, : 383 - 392