Enhancing Turkish Sentiment Analysis Using Pre-Trained Language Models

Cited: 1
Authors
Koksal, Omer [1]
Affiliation
[1] ASELSAN, Yapay Zeka & Bilisim Teknol, Ankara, Turkey
Keywords
Turkish sentiment analysis; natural language processing; machine learning; pre-trained language models;
DOI
10.1109/SIU53274.2021.9477908
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
Sentiment analysis is a natural language processing task widely used to obtain customers' opinions about a product and to measure customer satisfaction. As with other natural language processing tasks, sentiment analysis in Turkish faces language-specific difficulties. In this paper, we evaluate and compare various techniques used in Turkish sentiment analysis, employing pre-trained language models to improve classification performance. We compare our results with those of previous studies on a data set that had already been used for Turkish sentiment analysis with different techniques. The results show that, by using pre-trained language models, we achieve the best classification performance in Turkish sentiment analysis among the studies conducted on this data set.
Pages: 4
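
The paper itself does not include code, but the approach the abstract describes (fine-tuning a pre-trained language model for Turkish sentiment classification) can be sketched as below. This is a minimal illustration using Hugging Face Transformers; the checkpoint name (dbmdz/bert-base-turkish-cased, i.e. BERTurk), the toy examples, and the hyperparameters are assumptions chosen for illustration, not details taken from the paper.

```python
# Minimal sketch: fine-tuning a pre-trained Turkish BERT model for
# binary sentiment classification. Model name, data, and hyperparameters
# are illustrative assumptions, not taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "dbmdz/bert-base-turkish-cased"  # BERTurk; assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Toy labeled reviews (1 = positive, 0 = negative); a real run would use
# the labeled Turkish sentiment data set the paper evaluates on.
texts = [
    "Bu ürün harika, çok memnun kaldım.",
    "Kargo çok geç geldi, hiç memnun değilim.",
]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, max_length=128,
                  return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# A few fine-tuning steps on the classification head and encoder.
model.train()
for _ in range(3):
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: predict the sentiment of a new review.
model.eval()
with torch.no_grad():
    enc = tokenizer(["Fiyatına göre gayet iyi bir telefon."], return_tensors="pt")
    pred = model(**enc).logits.argmax(dim=-1).item()
print("positive" if pred == 1 else "negative")
```

In practice the model would be fine-tuned for several epochs over the full labeled data set with a held-out test split, and the resulting classification performance compared against non-pretrained baselines, which is the kind of comparison the paper reports.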