An interactive multi-head self-attention capsule network model for aspect sentiment classification

Cited by: 0
Authors: Lina She, Hongfang Gong, Siyu Zhang
Affiliation: [1] Changsha University of Science and Technology, School of Mathematics and Statistics
Source: The Journal of Supercomputing, 2024, 80(7)
Keywords: Capsule network; Local context mask; Multi-head self-attention; Sentiment classification
DOI: not available
Abstract:
The goal of aspect-level sentiment classification, a fine-grained task in sentiment analysis, is to determine the sentiment polarity of each aspect mentioned in a context. However, traditional attention mechanisms still under-explore the relationship between aspect terms and their context, and they struggle to recognize the overlapping features that arise when a sentence expresses multiple sentiment polarities, which limits how effectively they can obtain deeper semantic representations. To address these issues, we propose an interactive multi-head self-attention capsule network model (IMHSACap) for aspect sentiment classification. We design a Local Context Mask to attenuate the influence of non-local context far from the aspect terms while amplifying the influence of local context. The long-range dependencies between the global and local contexts are then captured by an interactive attention mechanism consisting of two parts, Global2Local and Local2Global. Finally, the routing algorithm and activation function of the capsule network are optimized to improve classification accuracy. Experiments on three publicly available datasets demonstrate that IMHSACap outperforms other baseline approaches for aspect sentiment classification.
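Note (illustrative only): the abstract names two mechanisms, a distance-based local context mask and a bidirectional (Global2Local / Local2Global) interactive attention, but gives no formulas. The PyTorch sketch below is a minimal reconstruction of that general idea under stated assumptions; the window size, the zero-out masking, the shared nn.MultiheadAttention module, and every function name here are assumptions, not the authors' implementation.

    # Minimal sketch under assumptions; not IMHSACap's actual code.
    import torch
    import torch.nn as nn

    def local_context_mask(seq_len, aspect_start, aspect_end, window):
        """Keep tokens within `window` positions of the aspect span
        [aspect_start, aspect_end]; zero out the non-local rest."""
        pos = torch.arange(seq_len)
        # distance of each token from the aspect span (0 inside the span)
        dist = torch.clamp(torch.maximum(aspect_start - pos, pos - aspect_end), min=0)
        return (dist <= window).float()          # shape: (seq_len,)

    torch.manual_seed(0)
    global_h = torch.randn(1, 10, 16)            # toy global context: 10 tokens, dim 16
    mask = local_context_mask(10, aspect_start=4, aspect_end=5, window=2)
    local_h = global_h * mask.view(1, 10, 1)     # masked local context

    mha = nn.MultiheadAttention(embed_dim=16, num_heads=4, batch_first=True)
    g2l, _ = mha(global_h, local_h, local_h)     # Global2Local: global queries local
    l2g, _ = mha(local_h, global_h, global_h)    # Local2Global: local queries global
    print(g2l.shape, l2g.shape)                  # torch.Size([1, 10, 16]) each

In the paper the two directions would plausibly use separate projection weights, and the resulting features would feed a capsule layer with the optimized routing and activation; a single shared attention module is used above only to keep the sketch short.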
Pages: 9327-9352
Page count: 25
Related papers (50 in total):
  • [1] An interactive multi-head self-attention capsule network model for aspect sentiment classification
    She, Lina
    Gong, Hongfang
    Zhang, Siyu
    [J]. JOURNAL OF SUPERCOMPUTING, 2024, 80 (07): 9327 - 9352
  • [2] Convolutional multi-head self-attention on memory for aspect sentiment classification
    Zhang, Yaojie
    Xu, Bing
    Zhao, Tiejun
    [J]. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2020, 7 (04) : 1038 - 1044
  • [3] Interactive Multi-Head Attention Networks for Aspect-Level Sentiment Classification
    Zhang, Qiuyue
    Lu, Ran
    Wang, Qicai
    Zhu, Zhenfang
    Liu, Peiyu
    [J]. IEEE ACCESS, 2019, 7 : 160017 - 160028
  • [4] Multi-Head Self-Attention Transformation Networks for Aspect-Based Sentiment Analysis
    Lin, Yuming
    Wang, Chaoqiang
    Song, Hao
    Li, You
    [J]. IEEE ACCESS, 2021, 9 : 8762 - 8770
  • [5] Multi-head self-attention based gated graph convolutional networks for aspect-based sentiment classification
    Xiao, Luwei
    Hu, Xiaohui
    Chen, Yinong
    Xue, Yun
    Chen, Bingliang
    Gu, Donghong
    Tang, Bixia
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (14) : 19051 - 19070
  • [6] The sentiment analysis model with multi-head self-attention and Tree-LSTM
    Li Lei
    Pei Yijian
    Jin Chenyang
    [J]. SIXTH INTERNATIONAL WORKSHOP ON PATTERN RECOGNITION, 2021, 11913
  • [7] Multi-head attention model for aspect level sentiment analysis
    Zhang, Xinsheng
    Gao, Teng
    [J]. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2020, 38 (01) : 89 - 96
  • [8] Capsule Network with Interactive Attention for Aspect-Level Sentiment Classification
    Du, Chunning
    Sun, Haifeng
    Wang, Jingyu
    Qi, Qi
    Liao, Jianxin
    Xu, Tong
    Liu, Ming
    [J]. 2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 5489 - 5498