Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction

Cited by: 0
Authors
Xu, Lu [1 ,2 ]
Chia, Yew Ken [1 ,2 ]
Bing, Lidong [2 ]
Affiliations
[1] Singapore Univ Technol & Design, Singapore, Singapore
[2] Alibaba Grp, DAMO Acad, Hangzhou, Peoples R China
Keywords
DOI
Not available
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Aspect Sentiment Triplet Extraction (ASTE) is the most recent subtask of Aspect-Based Sentiment Analysis (ABSA); it outputs triplets of an aspect target, its associated sentiment, and the corresponding opinion term. Recent models perform triplet extraction in an end-to-end manner but rely heavily on the interactions between each target word and each opinion word. As a result, they perform poorly on targets and opinions that contain multiple words. Our proposed span-level approach explicitly considers the interaction between the whole spans of targets and opinions when predicting their sentiment relation. It can therefore make predictions using the semantics of whole spans, ensuring better sentiment consistency. To ease the high computational cost caused by span enumeration, we propose a dual-channel span pruning strategy that incorporates supervision from the Aspect Term Extraction (ATE) and Opinion Term Extraction (OTE) tasks. This strategy not only improves computational efficiency but also distinguishes opinion spans from target spans more reliably. Our framework simultaneously achieves strong performance on the ASTE, ATE, and OTE tasks. In particular, our analysis shows that the span-level approach yields the largest improvements over the baselines on triplets with multi-word targets or opinions.
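The abstract's core mechanism — enumerate candidate spans, prune them through two separate channels (one supervised by ATE, one by OTE), then classify sentiment only over surviving target/opinion pairs — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, scoring interface, and pruning budget are illustrative assumptions, with real span scorers standing in for the simple callables used here.

```python
# Minimal sketch of span enumeration with dual-channel pruning.
# Assumption: each channel exposes a per-span scoring function
# (in the real model, a learned scorer over span representations).

def enumerate_spans(n_tokens, max_len):
    """All (start, end) token spans up to max_len tokens, end inclusive."""
    return [(i, j) for i in range(n_tokens)
            for j in range(i, min(i + max_len, n_tokens))]

def prune(spans, score_fn, keep):
    """Keep the top-`keep` spans under one channel's scorer."""
    return sorted(spans, key=score_fn, reverse=True)[:keep]

def candidate_pairs(tokens, target_score, opinion_score, max_len=3, keep=2):
    spans = enumerate_spans(len(tokens), max_len)
    targets = prune(spans, target_score, keep)    # channel supervised by ATE
    opinions = prune(spans, opinion_score, keep)  # channel supervised by OTE
    # Only pruned target/opinion pairs reach the sentiment classifier,
    # shrinking the quadratic-in-spans pair space to at most keep * keep pairs.
    return [(t, o) for t in targets for o in opinions]
```

The two channels are the point of the design: a single shared pruner would have to trade off target recall against opinion recall, whereas separate ATE- and OTE-supervised scorers let each channel keep the spans that matter for its own role.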
Pages: 4755-4766
Page count: 12