A Robust User Sentiment Biterm Topic Mixture Model Based on User Aggregation Strategy to Avoid Data Sparsity for Short Text

Cited: 0
Authors
Nimala K
Jebakumar R
Affiliation
SRM University, School of Computing
Keywords
Sentiment; Topic modelling; Biterm; Short text; Sparsity
DOI: not available
Abstract
Sentiment analysis is the process of computationally identifying the opinions that a writer expresses in a short text or piece of feedback towards a particular topic, product, or service. A short user review can help a business understand the attitude of the user and predict the customer's behaviour, thereby substantially improving quality-of-service parameters. The proposed Robust User Sentiment Biterm Topic Mixture (RUSBTM) model discovers user preferences and their sentiment orientations for effective topic modelling using biterms, i.e. word pairs, drawn from the short texts about a particular venue. Since short reviews suffer from data sparsity, a user aggregation strategy is adopted to form pseudo-documents, and a word-pair set is created for the whole corpus. RUSBTM learns topics by modelling word co-occurrence patterns, thereby inferring topics with rich corpus-level information. By analysing the sentiments of the paired words and their corresponding topics in the review corpus of a particular venue, predictions can be made that accurately portray user interest, preference, and expectation of that venue. The RUSBTM model proved to be more robust, and the extracted topics are more coherent and informative. The method also uses accurate sentiment-polarity techniques to capture sentiment orientation precisely, and the model outperforms other state-of-the-art methods.
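The two preprocessing steps the abstract describes, aggregating each user's short reviews into a pseudo-document and extracting biterms (unordered co-occurring word pairs) from it, can be sketched as follows. This is a minimal illustration of the general biterm/user-aggregation technique, not the paper's actual implementation; the function names, whitespace tokenisation, and example reviews are all assumptions for the sketch.

```python
from itertools import combinations
from collections import defaultdict

def aggregate_by_user(reviews):
    """Merge each user's short reviews into one pseudo-document
    (the user aggregation strategy used to combat data sparsity)."""
    pseudo_docs = defaultdict(list)
    for user, text in reviews:
        pseudo_docs[user].extend(text.lower().split())
    return pseudo_docs

def extract_biterms(tokens):
    """All unordered word pairs (biterms) co-occurring in a document,
    sorted within each pair so (a, b) and (b, a) count as one biterm."""
    return [tuple(sorted(pair)) for pair in combinations(tokens, 2)]

# Illustrative toy reviews for one venue (hypothetical data).
reviews = [
    ("u1", "great food"),
    ("u1", "friendly staff"),
    ("u2", "slow service"),
]
docs = aggregate_by_user(reviews)
corpus_biterms = {u: extract_biterms(toks) for u, toks in docs.items()}
```

Because u1's two reviews are merged before pair extraction, cross-review biterms such as ("food", "friendly") appear, which is exactly what the aggregation strategy buys over per-review biterm extraction on sparse short texts.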
Related papers
25 in total
  • [1] A Robust User Sentiment Biterm Topic Mixture Model Based on User Aggregation Strategy to Avoid Data Sparsity for Short Text
    Nimala, K.
    Jebakumar, R.
    JOURNAL OF MEDICAL SYSTEMS, 2019, 43 (04)
  • [2] User Based Aggregation for Biterm Topic Model
    Chen, Weizheng
    Wang, Jinpeng
    Zhang, Yan
    Yan, Hongfei
    Li, Xiaoming
    PROCEEDINGS OF THE 53RD ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL) AND THE 7TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (IJCNLP), VOL 2, 2015, : 489 - 494
  • [3] Dataless Short Text Classification Based on Biterm Topic Model and Word Embeddings
    Yang, Yi
    Wang, Hongan
    Zhu, Jiaqi
    Wu, Yunkun
    Jiang, Kailong
    Guo, Wenli
    Shi, Wandong
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 3969 - 3975
  • [4] A Dirichlet process biterm-based mixture model for short text stream clustering
    Chen, Junyang
    Gong, Zhiguo
    Liu, Weiwen
    APPLIED INTELLIGENCE, 2020, 50 (05) : 1609 - 1619
  • [5] Online Biterm Topic Model based short text stream classification using short text expansion and concept drifting detection
    Hu, Xuegang
    Wang, Haiyan
    Li, Peipei
    PATTERN RECOGNITION LETTERS, 2018, 116 : 187 - 194
  • [6] User clustering in a dynamic social network topic model for short text streams
    Qiu, Zhangcheng
    Shen, Hong
    INFORMATION SCIENCES, 2017, 414 : 102 - 116
  • [7] User group based emotion detection and topic discovery over short text
    Feng, Jiachun
    Rao, Yanghui
    Xie, Haoran
    Wang, Fu Lee
    Li, Qing
    WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2020, 23 (03) : 1553 - 1587
  • [8] SUIT: A Supervised User-Item Based Topic Model for Sentiment Analysis
    Li, Fangtao
    Wang, Sheng
    Liu, Shenghua
    Zhang, Ming
    PROCEEDINGS OF THE TWENTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2014, : 1636 - 1642