Explainable Sentiment Analysis: A Hierarchical Transformer-Based Extractive Summarization Approach

Cited: 15
Authors
Bacco, Luca [1 ,2 ]
Cimino, Andrea [2 ]
Dell'Orletta, Felice [2 ]
Merone, Mario [1 ]
Affiliations
[1] Univ Campus Biomed Roma, Unit Comp Syst & Bioinformat, Dept Engn, I-00128 Rome, Italy
[2] Ist Linguist Computaz Antonio Zampolli ILC CNR, ItaliaNLP Lab, I-56124 Pisa, Italy
Keywords
sentiment analysis; explainability; hierarchical transformers; extractive summarization;
DOI
10.3390/electronics10182195
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In recent years, the explainable artificial intelligence (XAI) paradigm has been gaining wide research interest. The natural language processing (NLP) community is also embracing this paradigm shift: building a suite of models that provide an explanation of the decision on some main task without sacrificing performance. This is certainly no easy task, especially when poorly interpretable models are involved, such as the transformers that have become almost ubiquitous in recent NLP literature. Here, we propose two different transformer-based methodologies that exploit the inner hierarchy of documents to perform a sentiment analysis task while extracting the sentences most important to the model's decision, building a summary that serves as an explanation of the output. In the first architecture, we place two transformers in cascade and leverage the attention weights of the second one to build the summary. In the other architecture, we employ a single transformer to classify the individual sentences in the document and then combine their probability scores to perform the classification and build the summary. We compared the two methodologies on the IMDB dataset, in terms of both classification and explainability performance. To assess the explainability part, we propose two kinds of metrics based on benchmarking the models' summaries against human annotations. We recruited four independent operators to annotate a few documents retrieved from the original dataset. Furthermore, we conducted an ablation study to highlight how certain strategies lead to important improvements in the explainability performance of the cascade transformers model.
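The abstract's second architecture can be illustrated with a minimal sketch: a sentence-level classifier produces a probability per sentence, the scores are combined into a document-level prediction, and the highest-scoring sentences form the extractive summary. The combination rule (mean), the summary length (`top_k`), and all function and variable names here are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the single-transformer architecture described in the
# abstract, assuming per-sentence probabilities are already available from
# some sentence-level sentiment classifier. The mean-combination rule and
# top-k summary length are assumptions for illustration only.

def classify_and_explain(sentence_probs, sentences, top_k=3):
    """sentence_probs: P(positive) for each sentence, in document order."""
    # Combine sentence scores into a document-level score (assumed: mean).
    doc_prob = sum(sentence_probs) / len(sentence_probs)
    label = "positive" if doc_prob >= 0.5 else "negative"
    # Rank sentences by how strongly they support the predicted label.
    support = [p if label == "positive" else 1.0 - p for p in sentence_probs]
    ranked = sorted(range(len(sentences)), key=lambda i: support[i], reverse=True)
    # Keep the top-k supporting sentences, restored to document order.
    summary = [sentences[i] for i in sorted(ranked[:top_k])]
    return label, doc_prob, summary

sentences = ["Great acting.", "The plot drags.", "A wonderful film.", "Too long."]
probs = [0.9, 0.3, 0.95, 0.2]
label, p, summary = classify_and_explain(probs, sentences, top_k=2)
# label is "positive"; summary holds the two most positive sentences.
```

The same interface could in principle serve the cascade architecture by replacing `support` with the second transformer's attention weights over sentences.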
Pages: 19
Related Papers (50 total)
  • [21] An Explainable CNN and Vision Transformer-Based Approach for Real-Time Food Recognition
    Nfor, Kintoh Allen
    Theodore Armand, Tagne Poupi
    Ismaylovna, Kenesbaeva Periyzat
    Joo, Moon-Il
    Kim, Hee-Cheol
    NUTRIENTS, 2025, 17 (02)
  • [22] TEDT: Transformer-Based Encoding–Decoding Translation Network for Multimodal Sentiment Analysis
    Fan Wang
    Shengwei Tian
    Long Yu
    Jing Liu
    Junwen Wang
    Kun Li
    Yongtao Wang
    Cognitive Computation, 2023, 15 : 289 - 303
  • [23] TMBL: Transformer-based multimodal binding learning model for multimodal sentiment analysis
    Huang, Jiehui
    Zhou, Jun
    Tang, Zhenchao
    Lin, Jiaying
    Chen, Calvin Yu-Chian
    KNOWLEDGE-BASED SYSTEMS, 2024, 285
  • [24] Unsupervised extractive opinion summarization based on text simplification and sentiment guidance
    Wang, Rui
    Lan, Tian
    Wu, Zufeng
    Liu, Leyuan
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 272
  • [25] Transformer-based deep learning models for the sentiment analysis of social media data
    Kokab, Sayyida Tabinda
    Asghar, Sohail
    Naz, Shehneela
    ARRAY, 2022, 14
  • [26] Transformer-based Hierarchical Encoder for Document Classification
    Sakhrani, Harsh
    Parekh, Saloni
    Ratadiya, Pratik
    21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS ICDMW 2021, 2021, : 852 - 858
  • [27] Enhancing the accuracy of transformer-based embeddings for sentiment analysis in social big data
    Zemzem, Wiem
    Tagina, Moncef
    INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS IN TECHNOLOGY, 2023, 73 (03) : 169 - 177
  • [28] Automatic text summarization using transformer-based language models
    Rao, Ritika
    Sharma, Sourabh
    Malik, Nitin
    INTERNATIONAL JOURNAL OF SYSTEM ASSURANCE ENGINEERING AND MANAGEMENT, 2024, 15 (06) : 2599 - 2605
  • [29] Hierarchical Transformer-based Query by Multiple Documents
    Huang, Zhiqi
    Naseri, Shahrzad
    Bonab, Hamed
    Sarwar, Sheikh Muhammad
    Allan, James
    PROCEEDINGS OF THE 2023 ACM SIGIR INTERNATIONAL CONFERENCE ON THE THEORY OF INFORMATION RETRIEVAL, ICTIR 2023, 2023, : 105 - 115
  • [30] Transformer-Based Extractive Social Media Question Answering on TweetQA
    Butt, Sabur
    Ashraf, Noman
    Fahim, Hammad
    Sidorov, Grigori
    Gelbukh, Alexander
    COMPUTACION Y SISTEMAS, 2021, 25 (01): : 23 - 32