Neural Topic Model with Reinforcement Learning

Cited by: 0
Authors
Gui, Lin [1 ]
Leng, Jia [2 ]
Pergola, Gabriele [1 ]
Zhou, Yu [1 ]
Xu, Ruifeng [2 ,3 ,4 ]
He, Yulan [1 ]
Affiliations
[1] Univ Warwick, Dept Comp Sci, Coventry, W Midlands, England
[2] Harbin Inst Technol Shenzhen, Shenzhen, Peoples R China
[3] Peng Cheng Lab, Shenzhen, Peoples R China
[4] Joint Lab Harbin Inst Technol & RICOH, Harbin, Peoples R China
Funding
National Natural Science Foundation of China; Innovate UK; EU Horizon 2020
Keywords
DOI: not available
Chinese Library Classification: TP18 [Theory of Artificial Intelligence]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
In recent years, advances in neural variational inference have achieved many successes in text processing. Examples include neural topic models, which are typically built upon a variational autoencoder (VAE) with the objective of minimising the error of reconstructing original documents from the learned latent topic vectors. However, minimising reconstruction error does not necessarily lead to high-quality topics. In this paper, we borrow the idea of reinforcement learning and incorporate topic coherence measures as reward signals to guide the learning of a VAE-based topic model. Furthermore, our proposed model is able to automatically and dynamically separate background words from topic words, thus eliminating the pre-processing step of filtering infrequent and/or highly frequent words typically required for learning traditional topic models. Experimental results on the 20 Newsgroups and NIPS datasets show superior performance on both perplexity and topic coherence measures compared to state-of-the-art neural topic models.
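
The following is a minimal PyTorch-style sketch of the approach described in the abstract: a VAE topic model trained with the standard reconstruction-plus-KL objective, augmented with a topic-coherence reward. The names (RLTopicModel, coherence_reward, reward_weight), the network sizes, the precomputed co-occurrence (e.g. NPMI) matrix, and the differentiable relaxation of the coherence reward used here in place of a REINFORCE-style update are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RLTopicModel(nn.Module):
    """Minimal VAE-style neural topic model (illustrative sketch only)."""
    def __init__(self, vocab_size, num_topics, hidden=256):
        super().__init__()
        # Encoder: bag-of-words -> Gaussian parameters over the latent topic space
        self.encoder = nn.Sequential(
            nn.Linear(vocab_size, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
        )
        self.mu = nn.Linear(hidden, num_topics)
        self.logvar = nn.Linear(hidden, num_topics)
        # Decoder weight acts as the topic-word matrix (num_topics -> vocab)
        self.beta = nn.Linear(num_topics, vocab_size, bias=False)

    def forward(self, bow):
        h = self.encoder(bow)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        theta = F.softmax(z, dim=-1)                            # topic proportions
        log_recon = F.log_softmax(self.beta(theta), dim=-1)     # word log-probs
        return log_recon, mu, logvar

def coherence_reward(topic_word, cooccur, top_k=10):
    # Cheap, differentiable coherence proxy: weighted pairwise co-occurrence
    # scores (e.g. NPMI, precomputed on the corpus) of each topic's top-k words.
    rewards = []
    for topic in topic_word:                      # (num_topics, vocab_size)
        top_vals, top_idx = torch.topk(topic, top_k)
        w = F.softmax(top_vals, dim=-1)           # differentiable word weights
        pairs = cooccur[top_idx][:, top_idx]      # (top_k, top_k) score block
        rewards.append((w.unsqueeze(1) * pairs * w.unsqueeze(0)).sum())
    return torch.stack(rewards).mean()

def training_step(model, bow, cooccur, optimizer, reward_weight=1.0):
    log_recon, mu, logvar = model(bow)
    recon = -(bow * log_recon).sum(-1).mean()     # reconstruction NLL of the BoW
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    # Coherence enters as a reward subtracted from the VAE loss; the paper
    # formulates this with reinforcement learning, here relaxed for brevity.
    reward = coherence_reward(model.beta.weight.t(), cooccur)
    loss = recon + kl - reward_weight * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In use, bow would be a (batch, vocab_size) term-count tensor and cooccur a (vocab_size, vocab_size) tensor of co-occurrence statistics; the abstract's dynamic separation of background words from topic words is not shown in this sketch.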
Pages: 3478-3483
Number of pages: 6
Related papers
50 records in total
  • [1] Contrastive Learning for Neural Topic Model
    Thong Nguyen
    Luu Anh Tuan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [2] Neural Topic Model with Attention for Supervised Learning
    Wang, Xinyi
    Yang, Yi
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108
  • [3] A neural model of hierarchical reinforcement learning
    Rasmussen, Daniel
    Voelker, Aaron
    Eliasmith, Chris
    PLOS ONE, 2017, 12 (07):
  • [4] Reinforcement Learning for Topic Models
    Costello, Jeremy
    Reformat, Marek Z.
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 4332 - 4351
  • [5] Motor learning model using reinforcement learning with neural internal model
    Izawa, J
    Kondo, T
    Ito, K
    2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS, 2003, : 3146 - 3151
  • [6] An agent reinforcement learning model based on neural networks
    Tang, Liang Gui
    An, Bo
    Cheng, Dai Jie
    BIO-INSPIRED COMPUTATIONAL INTELLIGENCE AND APPLICATIONS, 2007, 4688 : 117 - +
  • [7] Reinforcement learning in a spiking neural model of striatum plasticity
    Gonzalez-Redondo, Alvaro
    Garrido, Jesus
    Arrabal, Francisco Naveros
    Kotaleski, Jeanette Hellgren
    Grillner, Sten
    Ros, Eduardo
    NEUROCOMPUTING, 2023, 548
  • [8] Entrainable Neural Conversation Model Based on Reinforcement Learning
    Kawano, Seiya
    Mizukami, Masahiro
    Yoshino, Koichiro
    Nakamura, Satoshi
    IEEE ACCESS, 2020, 8 : 178283 - 178294
  • [9] A Motor Learning Neural Model based on Bayesian Network and Reinforcement Learning
    Hosoya, Haruo
    IJCNN: 2009 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1- 6, 2009, : 760 - 767
  • [10] Cycling topic graph learning for neural topic modeling
    Liu, Yanyan
    Gong, Zhiguo
    KNOWLEDGE-BASED SYSTEMS, 2025, 310