Detecting Sarcasm in Conversation Context Using Transformer-Based Models

Cited: 0
Authors
Avvaru, Adithya [1,2]
Vobilisetty, Sanath [2]
Mamidi, Radhika [1]
Affiliations
[1] Int Inst Informat Technol, Hyderabad, India
[2] Teradata India Pvt Ltd, Mumbai, Maharashtra, India
Keywords
DOI
Not available
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Sarcasm detection, regarded as a subproblem of sentiment analysis, is a particularly tricky task because the introduction of sarcastic words can flip the sentiment of a sentence. To date, most research has focused on detecting sarcasm within a single sentence, and there is very limited work on detecting sarcasm that arises from multiple sentences. Existing models use Long Short-Term Memory (Hochreiter and Schmidhuber, 1997) (LSTM) variants, with or without attention, to detect sarcasm in conversations. We show that models using the state-of-the-art Bidirectional Encoder Representations from Transformers (Devlin et al., 2018) (BERT), which capture syntactic and semantic information across conversation sentences, perform better than these existing models. Based on our data analysis, we estimate the number of sentences in the conversation that can contribute to the sarcasm, and our results agree with this estimate. We also perform a comparative study of different versions of our BERT-based model against LSTM variants and XLNet (Yang et al., 2019) (both using the estimated number of conversation sentences) and find that the BERT-based models outperform them.
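The abstract describes feeding a fixed number of preceding conversation sentences, together with the response, into a BERT-style classifier. A minimal sketch of that input construction is shown below; the function name `build_bert_input`, the default of three context turns, and the `[SEP]` joining convention are illustrative assumptions, not details confirmed by the paper.

```python
def build_bert_input(context, response, max_context=3, sep=" [SEP] "):
    """Build a single input sequence for a BERT-style sarcasm classifier.

    Keeps only the last `max_context` conversation turns (the paper
    estimates how many sentences can contribute to the sarcasm; the
    value 3 here is an assumed placeholder) and joins them with the
    response using BERT's [SEP] token so the encoder can attend across
    context and response jointly.
    """
    recent = context[-max_context:]  # most recent turns only
    return sep.join(recent + [response])
```

The resulting string would then be tokenized and passed to a fine-tuned BERT sequence classifier; truncating to the estimated number of context sentences keeps the input within BERT's length limit while preserving the turns most likely to trigger the sarcastic reading.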
Pages: 98-103
Page count: 6