Multi-perspective contrastive learning framework guided by sememe knowledge and label information for sarcasm detection

Cited by: 5
Authors
Wen, Zhiyuan [1 ,3 ]
Wang, Rui [1 ,3 ]
Luo, Xuan [1 ,3 ]
Wang, Qianlong [1 ,3 ]
Liang, Bin [1 ,3 ]
Du, Jiachen [1 ,3 ]
Yu, Xiaoqi [5 ]
Gui, Lin [2 ]
Xu, Ruifeng [1 ,3 ,4 ]
Affiliations
[1] Harbin Inst Technol Shenzhen, Joint Lab HITSZ CMS, Shenzhen 518055, Guangdong, Peoples R China
[2] Kings Coll London, London, England
[3] Guangdong Prov Key Lab Novel Secur Intelligence T, Shenzhen 518000, Guangdong, Peoples R China
[4] Peng Cheng Lab, Shenzhen 518000, Guangdong, Peoples R China
[5] China Merchants Secur Co Ltd, Shenzhen 518000, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sarcasm detection; Contrastive learning; Sememe knowledge; Deep learning; IRONY; MODEL;
DOI
10.1007/s13042-023-01884-9
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Sarcasm is a prevalent rhetorical device that intentionally uses words whose literal meaning is the opposite of the intended meaning. Because of this deliberate ambiguity, accurately detecting sarcasm helps reveal users' real intentions; sarcasm detection is therefore a critical and challenging task in sentiment analysis. In previous research, neural network-based models have generally been unsatisfactory when dealing with complex sarcastic expressions. To ameliorate this situation, we propose a multi-perspective contrastive learning framework for sarcasm detection, called SLGC, which is guided by sememe knowledge and label information and built on a pre-trained neural model. From the in-instance perspective, we leverage sememes, the minimum units of meaning, to guide contrastive learning toward high-quality sentence representations. From the between-instance perspective, we utilize label information to guide contrastive learning to mine potential interaction relationships between sarcastic expressions. Experiments on two public benchmark sarcasm detection datasets demonstrate that our approach significantly outperforms current state-of-the-art models.
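The between-instance perspective described above corresponds to supervised (label-guided) contrastive learning: representations of instances sharing a sarcasm label are pulled together while instances with different labels are pushed apart. The sketch below is an illustrative NumPy implementation of such a label-guided contrastive loss, not the authors' exact SLGC objective; the function name, temperature value, and toy inputs are assumptions for demonstration.

```python
import numpy as np

def label_guided_contrastive_loss(embeddings, labels, temperature=0.1):
    """Label-guided contrastive loss over a batch of sentence embeddings.

    For each anchor i, instances with the same label are positives; the
    loss is the mean negative log-probability of each positive against
    all other instances in the batch (self-similarity excluded).
    """
    # L2-normalize so similarities are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    n = len(labels)
    # mask that excludes each instance's similarity with itself
    others = ~np.eye(n, dtype=bool)
    # numerically stable log-softmax over the other instances
    row_max = np.max(np.where(others, sim, -np.inf), axis=1, keepdims=True)
    exp_sim = np.exp(sim - row_max) * others
    log_prob = sim - row_max - np.log(exp_sim.sum(axis=1, keepdims=True))
    per_anchor = []
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if positives:  # anchors without positives contribute nothing
            per_anchor.append(-np.mean([log_prob[i, j] for j in positives]))
    return float(np.mean(per_anchor))
```

As a sanity check, a batch whose labels align with the embedding clusters should yield a lower loss than the same embeddings with shuffled labels, since the aligned positives already sit close together.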
Pages: 4119-4134
Number of pages: 16
Related papers
50 records in total
  • [31] What are the key drivers to promote continuance intention of undergraduates in online learning? A multi-perspective framework
    Zhang, Jintao
    Zhang, Mingbo
    Liu, Yanming
    Zhang, Liqin
    FRONTIERS IN PSYCHOLOGY, 2023, 14
  • [32] What are the Key Drivers to Promote Continuance Intention of Undergraduates in Mobile Learning? A Multi-perspective Framework
    Li, Li
    SAGE OPEN, 2024, 14 (04):
  • [33] A multi-perspective information aggregation network for automated T-staging detection of nasopharyngeal carcinoma
    Liang, Shujun
    Dong, Xiuyu
    Yang, Kaifan
    Chu, Zhiqin
    Tang, Fan
    Ye, Feng
    Chen, Bei
    Guan, Jian
    Zhang, Yu
    PHYSICS IN MEDICINE AND BIOLOGY, 2022, 67 (24):
  • [34] 3D Multi-perspective Depth Detection Using Point Clouds and Machine Learning
    Esteves, Andrew
    Bickford, Harry
    Yang, Jaesung
    Shen, Xin
    Sohn, Kiwon
    THREE-DIMENSIONAL IMAGING, VISUALIZATION, AND DISPLAY 2024, 2024, 13041
  • [35] MPASL: multi-perspective learning knowledge graph attention network for synthetic lethality prediction in human cancer
    Zhang, Ge
    Chen, Yitong
    Yan, Chaokun
    Wang, Jianlin
    Liang, Wenjuan
    Luo, Junwei
    Luo, Huimin
    FRONTIERS IN PHARMACOLOGY, 2024, 15
  • [36] Research on Joint Extraction Method of Elevator Safety Risk Control Knowledge Based on Multi-Perspective Learning
    Hao, Suli
    Shi, Fenfen
    IEEE ACCESS, 2024, 12 : 159488 - 159502
  • [37] SeBot: Structural Entropy Guided Multi-View Contrastive Learning for Social Bot Detection
    Yang, Yingguang
    Wu, Qi
    He, Buyun
    Peng, Hao
    Yang, Renyu
    Hao, Zhifeng
    Liao, Yong
    PROCEEDINGS OF THE 30TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2024, 2024, : 3841 - 3852
  • [38] An information fusion framework with multi-channel feature concatenation and multi-perspective system combination for the deep-learning-based robust recognition of microphone array speech
    Tu, Yan-Hui
    Du, Jun
    Wang, Qing
    Bao, Xiao
    Dai, Li-Rong
    Lee, Chin-Hui
    COMPUTER SPEECH AND LANGUAGE, 2017, 46 : 517 - 534
  • [39] KRL_MLCCL: Multi-label classification based on contrastive learning for knowledge representation learning under open world
    Suo, Xinhua
    Guo, Bing
    Shen, Yan
    Chen, Yaosen
    Wang, Wei
    INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (05)
  • [40] Label-enhanced Prototypical Network with Contrastive Learning for Multi-label Few-shot Aspect Category Detection
    Liu, Han
    Zhang, Feng
    Zhang, Xiaotong
    Zhao, Siyang
    Sun, Junjie
    Yu, Hong
    Zhang, Xianchao
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 1079 - 1087