A Semantic Supervision Method for Abstractive Summarization

Cited by: 4
Authors
Hu, Sunqiang [1 ]
Li, Xiaoyu [1 ]
Deng, Yu [1 ]
Peng, Yu [1 ]
Lin, Bin [2 ]
Yang, Shan [3 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu 610054, Peoples R China
[2] Sichuan Normal Univ, Sch Engn, Chengdu 610066, Peoples R China
[3] Jackson State Univ, Dept Chem Phys & Atmospher Sci, Jackson, MS 39217 USA
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2021, Vol. 69, No. 1
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Text summarization; semantic supervision; capsule network;
DOI
10.32604/cmc.2021.017441
CLC number
TP [Automation and computer technology];
Discipline code
0812;
Abstract
In recent years, many text summarization models based on pretraining methods have achieved very good results. However, in these models, semantic deviations easily arise between the original input representation and the representation produced by the multi-layer encoder, which may cause inconsistencies between the generated summary and the source text. Bidirectional Encoder Representations from Transformers (BERT) improves the performance of many tasks in Natural Language Processing (NLP). Although BERT has a strong capability to encode context, it lacks fine-grained semantic representation. To address these two problems, we propose a semantic supervision method based on the Capsule Network. First, we use a Capsule Network to extract the fine-grained semantic representations of both the input and the encoded result in BERT. Second, we use the fine-grained semantic representation of the input to supervise that of the encoded result. We evaluate our model on a popular Chinese social media dataset (LCSTS); it achieves higher ROUGE scores (R-1 and R-2) and outperforms the baseline systems. Finally, a comparative study of model stability shows that our model is more stable.
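The supervision idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`squash`, `capsule_semantics`, `supervision_loss`), the shared projection matrix, and the use of mean pooling in place of dynamic routing are all assumptions made for brevity.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    # Capsule "squash" nonlinearity: keeps the vector's direction
    # while mapping its norm into [0, 1).
    sq_norm = np.sum(v * v, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * v / np.sqrt(sq_norm + eps)

def capsule_semantics(hidden, W, n_caps, d_caps):
    # Project token-level states (seq_len, d_model) into capsule space
    # and squash, giving a fine-grained representation (n_caps, d_caps).
    u = (hidden @ W).reshape(hidden.shape[0], n_caps, d_caps)
    # Mean pooling over tokens is a simple stand-in for dynamic routing.
    return squash(u.mean(axis=0))

def supervision_loss(input_caps, encoded_caps):
    # Mean-squared distance between the two capsule representations;
    # minimizing it pulls the encoder output toward the input semantics.
    return float(np.mean((input_caps - encoded_caps) ** 2))

rng = np.random.default_rng(0)
seq_len, d_model, n_caps, d_caps = 16, 32, 4, 8
W = rng.standard_normal((d_model, n_caps * d_caps)) * 0.1  # hypothetical projection
x = rng.standard_normal((seq_len, d_model))        # stand-in input embeddings
h = x + 0.05 * rng.standard_normal(x.shape)        # stand-in encoder output
loss = supervision_loss(capsule_semantics(x, W, n_caps, d_caps),
                        capsule_semantics(h, W, n_caps, d_caps))
```

In training, this loss term would be added to the usual summary-generation objective, so the encoder is penalized whenever its output drifts semantically from the input.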
Pages: 145-158
Page count: 14
Related papers
50 records
  • [32] A novel semantic-enhanced generative adversarial network for abstractive text summarization
    Vo, Tham
    [J]. SOFT COMPUTING, 2023, 27 (10) : 6267 - 6280
  • [33] Abstractive Event Summarization on Twitter
    Li, Quanzhi
    Zhang, Qiong
    [J]. WWW'20: COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2020, 2020, : 22 - 23
  • [34] Global Encoding for Abstractive Summarization
    Lin, Junyang
    Sun, Xu
    Ma, Shuming
    Su, Qi
    [J]. PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 2, 2018, : 163 - 169
  • [35] An approach to Abstractive Text Summarization
    Huong Thanh Le
    Tien Manh Le
    [J]. 2013 INTERNATIONAL CONFERENCE OF SOFT COMPUTING AND PATTERN RECOGNITION (SOCPAR), 2013, : 371 - 376
  • [36] Abstractive text summarization for Hungarian
    Yang, Zijian Gyozo
    Agocs, Adam
    Kusper, Gabor
    Varadi, Tamas
    [J]. ANNALES MATHEMATICAE ET INFORMATICAE, 2021, 53 : 299 - 316
  • [37] A Survey on Abstractive Text Summarization
    Moratanch, N.
    Chitrakala, S.
    [J]. PROCEEDINGS OF IEEE INTERNATIONAL CONFERENCE ON CIRCUIT, POWER AND COMPUTING TECHNOLOGIES (ICCPCT 2016), 2016,
  • [38] On Faithfulness and Factuality in Abstractive Summarization
    Maynez, Joshua
    Narayan, Shashi
    Bohnet, Bernd
    McDonald, Ryan
    [J]. 58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 1906 - 1919
  • [39] A Survey on Abstractive Summarization Techniques
    Rachabathuni, Pavan Kartheek
    [J]. PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON INVENTIVE COMPUTING AND INFORMATICS (ICICI 2017), 2017, : 762 - 765
  • [40] Abstractive Meeting Summarization: A Survey
    Rennard, Virgile
    Shang, Guokan
    Hunter, Julie
    Vazirgiannis, Michalis
    [J]. TRANSACTIONS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2023, 11 : 861 - 884