Pre-Trained Language Model-Based Deep Learning for Sentiment Classification of Vietnamese Feedback

Cited by: 0
Authors:
Loc, Cu Vinh [1 ]
Viet, Truong Xuan [1 ]
Viet, Tran Hoang [1 ]
Thao, Le Hoang [1 ]
Viet, Nguyen Hoang [1 ]
Affiliations:
[1] Can Tho Univ, Software Ctr, Can Tho city, Vietnam
Keywords:
Sentiment analysis; PhoBERT; deep learning; text classification; Vietnamese feedback;
DOI:
10.1142/S1469026823500165
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract:
In recent years, with the rapid growth of the Internet, customers increasingly consult the feedback of previous buyers when shopping online. Websites therefore allow users to share experiences, reviews, comments, and feedback about the services and products of businesses and organizations, and organizations also collect this feedback to guide improvements to their offerings. However, given the large volume of feedback on any particular service or product, it is difficult for users, businesses, and organizations to attend to it all, so an automatic system for analyzing the sentiment of customer feedback is necessary. Recently, the well-known pre-trained language model for Vietnamese, PhoBERT, has achieved high performance compared with other approaches. However, it may not capture local information in the text, such as phrases or fragments. In this paper, we propose a Convolutional Neural Network (CNN) model based on PhoBERT for sentiment classification. The contextualized embeddings from PhoBERT's last four layers are fed into the CNN, enabling the network to capture more local information from the text. In addition, the PhoBERT output is passed through transformer encoder layers that apply self-attention, which helps the model focus on the important segments of the input. Experimental results demonstrate that the proposed approach achieves competitive performance compared with existing studies on three public datasets of Vietnamese opinions.
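The architecture outlined in the abstract (last-four-layer PhoBERT embeddings into a CNN branch, plus a self-attention branch over the output) can be sketched as a classification head in PyTorch. This is an illustrative reconstruction, not the authors' code: the class name `PhoBertCnnHead` and all hyperparameters (filter count, kernel size, number of heads) are assumptions, and the head expects the four hidden-state tensors that `transformers.AutoModel.from_pretrained("vinai/phobert-base", output_hidden_states=True)` would produce.

```python
import torch
import torch.nn as nn

class PhoBertCnnHead(nn.Module):
    """Sentiment head over the last four PhoBERT hidden layers (sketch).

    The four layers are concatenated along the feature axis and passed
    through a 1-D convolution to capture local, phrase-level features;
    a transformer-encoder branch applies self-attention over the tokens
    of the last layer. Both branches are pooled and classified jointly.
    """

    def __init__(self, hidden=768, n_classes=3, n_filters=128):
        super().__init__()
        # CNN branch: the 4 concatenated layers form the input channels.
        self.conv = nn.Conv1d(4 * hidden, n_filters, kernel_size=3, padding=1)
        # Self-attention branch over the last hidden layer.
        enc_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=1)
        self.fc = nn.Linear(n_filters + hidden, n_classes)

    def forward(self, last4):
        # last4: list of 4 tensors of shape (batch, tokens, hidden),
        # i.e. model_output.hidden_states[-4:] from PhoBERT.
        x = torch.cat(last4, dim=-1)                   # (B, T, 4H)
        c = torch.relu(self.conv(x.transpose(1, 2)))   # (B, F, T)
        c = c.max(dim=2).values                        # global max pool -> (B, F)
        a = self.encoder(last4[-1]).mean(dim=1)        # attention branch -> (B, H)
        return self.fc(torch.cat([c, a], dim=-1))      # (B, n_classes)
```

For a three-class sentiment task the head would be trained end-to-end on top of (or jointly with) PhoBERT; the pooling choices here (max over the CNN features, mean over the attended tokens) are one plausible design, not a detail stated in the abstract.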
Pages: 14