Symmetric Regularized Sequential Latent Variable Models With Adversarial Neural Networks

Cited: 0
Authors
Huang, Jin [1 ]
Xiao, Ming [1 ]
Affiliations
[1] KTH Royal Inst Technol, Div Informat Sci & Engn, S-10044 Stockholm, Sweden
Keywords
GAN; latent variable; variational recurrent model; generative model
DOI
10.1109/TETCI.2024.3398036
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recurrent neural networks (RNNs), with richly distributed internal states and flexible non-linear transition functions, have gradually overtaken dynamic Bayesian networks in modeling highly structured sequential data. Such data, which may come from speech or handwriting, often contain complex relationships between the underlying variational factors, such as speaker characteristics, and the observed data. The standard RNN model has very limited randomness or variability in its structure, arising only from the output conditional probability model. To improve variability and performance, we study new latent variable models with novel regularization methods. This paper presents different ways of using high-level latent random variables in RNNs to model the variability in sequential data, and explores ways of using adversarial methods to train a variational RNN model. Through theoretical analysis we show that, contrary to competing approaches, our schemes are theoretically optimal in model training, and that the symmetric objective function in the adversarial training provides better training stability. Our approach also improves the posterior approximation in the variational inference network through a separate adversarial training step. Numerical results on TIMIT speech data show that the reconstruction loss and the evidence lower bound converge to the same level, and that the adversarial training loss converges stably. The results also show that our regularization approach provides stability and smoothness when minimizing the probability-distribution distance between the prior and the posterior of the latent variables.
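For context (not stated in this record): the sequential evidence lower bound used in variational RNNs decomposes into a reconstruction term minus a divergence between the approximate posterior and the prior of the latent variables, so a regularizer that drives the posterior toward the prior shrinks the divergence term, which is consistent with the abstract's observation that the reconstruction loss and the evidence lower bound converge to the same level. Below is a minimal Python/PyTorch sketch of that idea under stated assumptions: a VRNN-style cell (after Chung et al.) whose posterior is pulled toward the learned prior by a discriminator trained with the standard symmetric (JS-style) GAN objective. All layer sizes, names, and the choice of loss are illustrative, not the authors' implementation.

# Minimal sketch (not the authors' published code): a VRNN-style cell whose
# approximate posterior q(z_t | x_t, h_{t-1}) is pulled toward the learned
# prior p(z_t | h_{t-1}) by an adversarial regularizer instead of an explicit
# KL term. Layer sizes, the GRU update, and the GAN loss are assumptions.
import torch
import torch.nn as nn

class VRNNCell(nn.Module):
    def __init__(self, x_dim, z_dim, h_dim):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)         # p(z_t | h_{t-1})
        self.post = nn.Linear(h_dim + x_dim, 2 * z_dim)  # q(z_t | x_t, h_{t-1})
        self.dec = nn.Linear(h_dim + z_dim, x_dim)       # p(x_t | z_t, h_{t-1})
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)      # deterministic state update

    def step(self, x_t, h):
        # Reparameterized samples from the posterior and the learned prior.
        mu_q, logvar_q = self.post(torch.cat([h, x_t], dim=-1)).chunk(2, dim=-1)
        z_post = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        mu_p, logvar_p = self.prior(h).chunk(2, dim=-1)
        z_prior = mu_p + torch.randn_like(mu_p) * (0.5 * logvar_p).exp()
        x_rec = self.dec(torch.cat([h, z_post], dim=-1))
        h_next = self.rnn(torch.cat([x_t, z_post], dim=-1), h)
        return z_post, z_prior, x_rec, h_next

def adversarial_reg_losses(D, z_post, z_prior):
    # D distinguishes prior samples (label 1) from posterior samples (label 0);
    # the minimax game minimizes a symmetric JS-style divergence between them.
    bce = nn.BCEWithLogitsLoss()
    real, fake = D(z_prior.detach()), D(z_post.detach())
    d_loss = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
    fooled = D(z_post)  # gradients flow into the inference network only here
    g_loss = bce(fooled, torch.ones_like(fooled))  # non-saturating generator loss
    return d_loss, g_loss

# Example per-step usage (hypothetical shapes; 200-dim frame features assumed):
cell = VRNNCell(x_dim=200, z_dim=16, h_dim=128)
D = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
x_t, h = torch.randn(32, 200), torch.zeros(32, 128)
z_post, z_prior, x_rec, h = cell.step(x_t, h)
d_loss, g_loss = adversarial_reg_losses(D, z_post, z_prior)
rec_loss = nn.functional.mse_loss(x_rec, x_t)

In this sketch, d_loss updates only the discriminator (both latent samples are detached), while g_loss backpropagates through z_post into the inference network, playing the role of the explicit KL penalty; the symmetry of the underlying JS divergence is one plausible reading of the "symmetric objective" the abstract credits with training stability.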
Pages: 565-575
Number of pages: 11