Unsupervised Extractive Summarization of Emotion Triggers

Cited by: 0
Authors:
Sosea, Tiberiu [1]
Zhan, Hongli [2]
Li, Junyi Jessy [2]
Caragea, Cornelia [1]
Affiliations:
[1] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
[2] Univ Texas Austin, Dept Linguist, Austin, TX 78712 USA
Funding: U.S. National Science Foundation
Keywords: WORDS
DOI: not available
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Understanding what leads to emotions during large-scale crises is important as it can provide groundings for expressed emotions and subsequently improve the understanding of ongoing disasters. Recent approaches (Zhan et al., 2022) trained supervised models to both detect emotions and explain emotion triggers (events and appraisals) via abstractive summarization. However, obtaining timely and qualitative abstractive summaries is expensive and extremely time-consuming, requiring highly-trained expert annotators. In time-sensitive, high-stake contexts, this can block necessary responses. We instead pursue unsupervised systems that extract triggers from text. First, we introduce COVIDET-EXT, augmenting (Zhan et al., 2022)'s abstractive dataset (in the context of the COVID-19 crisis) with extractive triggers. Second, we develop new unsupervised learning models that can jointly detect emotions and summarize their triggers. Our best approach, entitled Emotion-Aware Pagerank, incorporates emotion information from external sources combined with a language understanding module, and outperforms strong baselines. We release our data and code at https://github.com/tsosea2/CovidET-EXT.
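The abstract describes an emotion-biased variant of PageRank run over sentences to extract trigger summaries. As a rough illustration only (not the paper's actual Emotion-Aware Pagerank, which additionally draws emotion information from external sources and a language-understanding module), the sketch below shows how a teleportation vector weighted by an emotion lexicon can steer a standard sentence-graph PageRank toward emotion-bearing sentences. The encoder name, similarity threshold, `top_k`, and the `emotion_lexicon` argument are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch of an emotion-biased sentence PageRank for extractive
# trigger summarization. Hypothetical stand-in for the paper's method:
# the encoder, edge threshold, and lexicon-based emotion scorer are
# assumptions made for illustration.
import itertools

import networkx as nx
from sentence_transformers import SentenceTransformer, util


def emotion_biased_pagerank(sentences, emotion_lexicon, damping=0.85, top_k=3):
    """Rank sentences with a PageRank whose teleportation vector favors
    emotion-bearing sentences, and return the top_k in document order."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder
    emb = model.encode(sentences, convert_to_tensor=True)

    # Build an undirected similarity graph over sentences.
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        sim = float(util.cos_sim(emb[i], emb[j]))
        if sim > 0.1:  # prune weak edges (illustrative threshold)
            graph.add_edge(i, j, weight=sim)

    # Personalization: fraction of emotion-lexicon tokens in each sentence.
    def emotion_score(sentence):
        tokens = sentence.lower().split()
        hits = sum(token in emotion_lexicon for token in tokens)
        return (hits + 1e-6) / (len(tokens) + 1e-6)

    personalization = {i: emotion_score(s) for i, s in enumerate(sentences)}

    scores = nx.pagerank(
        graph, alpha=damping, personalization=personalization, weight="weight"
    )
    ranked = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [sentences[i] for i in sorted(ranked)]
```

Called on the sentences of a social media post together with any word-level emotion lexicon, the function returns the sentences that are both centrally connected in the similarity graph and dense in emotion vocabulary, which captures the intuition behind extracting emotion triggers rather than generic salient content.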
Pages: 9550-9569 (20 pages)
Related Papers (items 31-40 of 50)
  • [31] An unsupervised method for extractive multi-document summarization based on centroid approach and sentence embeddings
    Lamsiyah, Salima
    El Mahdaouy, Abdelkader
    Espinasse, Bernard
    Ouatik, Said El Alaoui
    EXPERT SYSTEMS WITH APPLICATIONS, 2021, 167
  • [32] Long-Span Language Models for Query-Focused Unsupervised Extractive Text Summarization
    Singh, Mittul
    Mishra, Arunav
    Oualil, Youssef
    Berberich, Klaus
    Klakow, Dietrich
    ADVANCES IN INFORMATION RETRIEVAL (ECIR 2018), 2018, 10772 : 657 - 664
  • [33] Augmenting Neural Sentence Summarization Through Extractive Summarization
    Zhu, Junnan
    Zhou, Long
    Li, Haoran
    Zhang, Jiajun
    Zhou, Yu
    Zong, Chengqing
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, NLPCC 2017, 2018, 10619 : 16 - 28
  • [34] Fairness of Extractive Text Summarization
    Shandilya, Anurag
    Ghosh, Kripabandhu
    Ghosh, Saptarshi
    COMPANION PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2018 (WWW 2018), 2018, : 97 - 98
  • [35] Extractive Summarization for Myanmar Language
    Lwin, Soe Soe
    Nwet, Khin Thandar
    2018 INTERNATIONAL JOINT SYMPOSIUM ON ARTIFICIAL INTELLIGENCE AND NATURAL LANGUAGE PROCESSING (ISAI-NLP 2018), 2018, : 138 - 143
  • [36] Deep Extractive Text Summarization
    Bhargava, Rupal
    Sharma, Yashvardhan
    INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND DATA SCIENCE, 2020, 167 : 138 - 146
  • [37] A Review of the Extractive Text Summarization
    Mendoza Becerra, Martha Eliana
    Leon Guzman, Elizabeth
    UIS INGENIERIAS, 2013, 12 (01): 7 - 27
  • [38] An Overview on Extractive Text Summarization
    Rahimi, Shohreh Rad
    Mozhdehi, Ali Toofanzadeh
    Abdolahi, Mohamad
    2017 IEEE 4TH INTERNATIONAL CONFERENCE ON KNOWLEDGE-BASED ENGINEERING AND INNOVATION (KBEI), 2017, : 54 - 62
  • [39] Extractive Summarization of Call Transcripts
    Biswas, Pratik K.
    Iakubovich, Aleksandr
    IEEE ACCESS, 2022, 10 : 119826 - 119840
  • [40] A Survey on Extractive Text Summarization
    Moratanch, N.
    Chitrakala, S.
    2017 INTERNATIONAL CONFERENCE ON COMPUTER, COMMUNICATION AND SIGNAL PROCESSING (ICCCSP), 2017, : 265 - 270