A Transformer-Based Variational Autoencoder for Sentence Generation

Cited by: 24
Authors
Liu, Danyang [1 ]
Liu, Gongshen [1 ]
Institutions
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
variational autoencoder; text generation; self-attention; transformer;
DOI
10.1109/ijcnn.2019.8852155
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The variational autoencoder (VAE) has proved to be an efficient generative model, but its applications to natural language tasks have not been fully explored. This paper presents a novel variational autoencoder for natural text generation. In contrast to previously introduced variational autoencoders for text, where both the encoder and decoder are RNN-based, we propose a new transformer-based architecture and augment the decoder with an LSTM language-model layer to fully exploit the information in the latent variables. We also propose methods to address problems that arise during training, such as KL divergence collapse and model degradation. In the experiments, we evaluate our model with random sampling and linear interpolation. Results show that the sentences generated by our approach are more meaningful and that their semantics are more coherent in the latent space.
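The paper's implementation is not reproduced here, but two ingredients the abstract mentions can be sketched concretely: the diagonal-Gaussian KL term of the VAE objective, and a linear KL-annealing schedule, which is one common remedy for KL collapse (the authors' exact mitigation may differ). All names below are illustrative, not from the paper.

```python
import math

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions:
    # 0.5 * sum(mu^2 + sigma^2 - log sigma^2 - 1)
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, logvar))

def kl_weight(step, warmup_steps=10_000):
    # Linear annealing: the KL weight grows from 0 to 1 over warmup_steps,
    # letting the decoder learn to use the latent code before the KL
    # penalty pushes the posterior toward the prior (KL collapse).
    return min(1.0, step / warmup_steps)

# A posterior identical to the prior (mu = 0, log-variance = 0) has zero KL.
print(gaussian_kl([0.0, 0.0], [0.0, 0.0]))
```

The total training loss at step `t` would then be `reconstruction + kl_weight(t) * gaussian_kl(mu, logvar)`, averaged over the batch.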
Pages: 7
Related Papers
50 records in total
  • [1] A Transformer-Based Hierarchical Variational AutoEncoder Combined Hidden Markov Model for Long Text Generation
    Zhao, Kun
    Ding, Hongwei
    Ye, Kai
    Cui, Xiaohui
    [J]. ENTROPY, 2021, 23 (10)
  • [2] Adaptive Transformer-Based Conditioned Variational Autoencoder for Incomplete Social Event Classification
    Li, Zhangming
    Qian, Shengsheng
    Cao, Jie
    Fang, Quan
    Xu, Changsheng
    [J]. PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 1698 - 1707
  • [3] T-CVAE: Transformer-Based Conditioned Variational Autoencoder for Story Completion
    Wang, Tianming
    Wan, Xiaojun
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 5233 - 5239
  • [4] Emotional Dialogue Generation Based on Transformer and Conditional Variational Autoencoder
    Lin, Hongquan
    Deng, Zhenrong
    [J]. 2022 IEEE 21ST INTERNATIONAL CONFERENCE ON UBIQUITOUS COMPUTING AND COMMUNICATIONS, IUCC/CIT/DSCI/SMARTCNS, 2022, : 386 - 393
  • [5] Unsupervised Anomaly Detection in Multivariate Time Series through Transformer-based Variational Autoencoder
    Zhang, Hongwei
    Xia, Yuanqing
    Yan, Tijin
    Liu, Guiyang
    [J]. PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021, : 281 - 286
  • [6] Sentiment-Oriented Transformer-Based Variational Autoencoder Network for Live Video Commenting
    Fu, Fengyi
    Fang, Shancheng
    Chen, Weidong
    Mao, Zhendong
    [J]. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (04)
  • [7] Latent Space Expanded Variational Autoencoder for Sentence Generation
    Song, Tianbao
    Sun, Jingbo
    Chen, Bo
    Peng, Weiming
    Song, Jihua
    [J]. IEEE ACCESS, 2019, 7 : 144618 - 144627
  • [8] Efficient Transformer-Based Sentence Encoding for Sentence Pair Modelling
    Ahmed, Mahtab
    Mercer, Robert E.
    [J]. ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, 11489 : 146 - 159
  • [9] Topic-word-constrained sentence generation with variational autoencoder
    Song, Tianbao
    Sun, Jingbo
    Liu, Xin
    Song, Jihua
    Peng, Weiming
    [J]. PATTERN RECOGNITION LETTERS, 2022, 160 : 148 - 154
  • [10] De Novo Generation of Chemical Structures of Inhibitor and Activator Candidates for Therapeutic Target Proteins by a Transformer-Based Variational Autoencoder and Bayesian Optimization
    Matsukiyo, Yuki
    Yamanaka, Chikashige
    Yamanishi, Yoshihiro
    [J]. JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2023, 64 (07) : 2345 - 2355