Enhancing Turkish Sentiment Analysis Using Pre-Trained Language Models

Cited by: 1
Authors
Koksal, Omer [1 ]
Affiliation
[1] ASELSAN, Yapay Zeka & Bilisim Teknol, Ankara, Turkey
Keywords
Turkish sentiment analysis; natural language processing; machine learning; pre-trained language models;
DOI
10.1109/SIU53274.2021.9477908
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
Sentiment analysis is a natural language processing task widely used to obtain customers' opinions about a product and to measure customer satisfaction. As with other natural language processing tasks, sentiment analysis in Turkish involves language-specific difficulties. In this paper, we evaluate and compare various techniques used in Turkish sentiment analysis. We use pre-trained language models to improve classification performance, and we compare our results with previous studies that performed sentiment analysis on the same data set using different techniques. Our results show that, among the studies conducted on this data set, using pre-trained language models achieves the best classification performance for Turkish sentiment analysis.
Pages: 4
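
The record does not name the specific pre-trained model or hyperparameters the paper used. As a minimal sketch of the approach the abstract describes, the following assumes the Hugging Face transformers library and the community BERTurk checkpoint dbmdz/bert-base-turkish-cased with a binary sentiment head; all names and settings here are illustrative, not taken from the paper.

# Sketch: sentiment classification with a Turkish pre-trained language model.
# Assumed checkpoint and hyperparameters; not the paper's exact configuration.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "dbmdz/bert-base-turkish-cased"  # assumed model, not named in the paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=2 for binary (positive/negative) sentiment; the classification head
# is freshly initialized and would be fine-tuned on labeled Turkish reviews.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

reviews = [
    "Ürün beklediğimden çok daha iyi çıktı.",   # "The product was much better than I expected."
    "Kargo geç geldi ve paket hasarlıydı.",     # "Shipping was late and the package was damaged."
]

# Tokenize the batch and run a forward pass to get class logits.
inputs = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities (near-uniform until fine-tuned)

After fine-tuning such a model on a labeled Turkish review data set, the softmax outputs would give per-class sentiment probabilities for each input text.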
Related Papers
50 records in total
  • [21] LaoPLM: Pre-trained Language Models for Lao
    Lin, Nankai
    Fu, Yingwen
    Yang, Ziyu
    Chen, Chuwei
    Jiang, Shengyi
[J]. LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 6506 - 6512
  • [22] HinPLMs: Pre-trained Language Models for Hindi
    Huang, Xixuan
    Lin, Nankai
    Li, Kexin
    Wang, Lianxi
    Gan, Suifu
    [J]. 2021 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING (IALP), 2021, : 241 - 246
  • [23] PhoBERT: Pre-trained language models for Vietnamese
Nguyen, Dat Quoc
Nguyen, Anh Tuan
    [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 1037 - 1042
  • [24] Knowledge Inheritance for Pre-trained Language Models
    Qin, Yujia
    Lin, Yankai
    Yi, Jing
    Zhang, Jiajie
    Han, Xu
    Zhang, Zhengyan
    Su, Yusheng
    Liu, Zhiyuan
    Li, Peng
    Sun, Maosong
    Zhou, Jie
    [J]. NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 3921 - 3937
  • [25] Evaluating Commonsense in Pre-Trained Language Models
    Zhou, Xuhui
    Zhang, Yue
    Cui, Leyang
    Huang, Dandan
    [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 9733 - 9740
  • [26] Pre-trained language models in medicine: A survey
    Luo, Xudong
    Deng, Zhiqi
    Yang, Binxia
    Luo, Michael Y.
    [J]. ARTIFICIAL INTELLIGENCE IN MEDICINE, 2024, 154
  • [27] Probing for Hyperbole in Pre-Trained Language Models
    Schneidermann, Nina Skovgaard
    Hershcovich, Daniel
    Pedersen, Bolette Sandford
    [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-SRW 2023, VOL 4, 2023, : 200 - 211
  • [28] Roman Urdu Sentiment Analysis Using Pre-trained DistilBERT and XLNet
    Azhar, Nikhar
    Latif, Seemab
    [J]. 2022 FIFTH INTERNATIONAL CONFERENCE OF WOMEN IN DATA SCIENCE AT PRINCE SULTAN UNIVERSITY (WIDS-PSU 2022), 2022, : 75 - 78
  • [29] A Comparative Study of Different Pre-trained Language Models for Sentiment Analysis of Human-Computer Negotiation Dialogue
    Dong, Jing
    Luo, Xudong
    Zhu, Junlin
    [J]. KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT IV, KSEM 2024, 2024, 14887 : 301 - 317
  • [30] The Biases of Pre-Trained Language Models: An Empirical Study on Prompt-Based Sentiment Analysis and Emotion Detection
    Mao, Rui
    Liu, Qian
    He, Kai
    Li, Wei
    Cambria, Erik
    [J]. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (03) : 1743 - 1753