Multi-level shared-weight encoding for abstractive sentence summarization

Cited by: 0
Authors
Lal, Daisy Monika [1 ]
Singh, Krishna Pratap [1 ]
Tiwary, Uma Shanker [2 ]
Affiliations
[1] Machine Learning and Optimization Lab, Department of IT, IIIT Allahabad, Prayagraj 211012, Uttar Pradesh, India
[2] Speech Image and Language Processing Lab, Department of IT, IIIT Allahabad, Prayagraj 211012, Uttar Pradesh, India
Keywords
Signal encoding; Encoding (symbols)
DOI: not available
Abstract
Features in a text are hierarchically structured and may not be learned optimally with one-step encoding. Reading a piece of text several times facilitates a better understanding of its content and helps frame faithful context representations. The proposed model encapsulates this idea of re-examining a text multiple times to grasp the underlying theme and aspects of English grammar before formulating a summary. We propose a multi-level shared-weight encoder (MSE) that focuses exclusively on the sentence summarization task. MSE uses a weight-sharing mechanism to regulate the multi-level encoding process; weight sharing helps recognize patterns left undiscovered by a single-level encoding strategy. We perform experiments with six encoding levels with weight sharing on the well-known Gigaword and DUC2004 Task 1 short-sentence summarization datasets. The experiments show that MSE generates a more readable (fluent) summary (ROUGE-L score) than multiple benchmark models while preserving similar levels of informativeness (ROUGE-1 and ROUGE-2 scores). Moreover, human evaluation of the generated abstracts corroborates these assertions of enhanced readability. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
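The core mechanism described in the abstract — applying one encoder repeatedly, with the same parameters at every level — can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual architecture: the `tanh` layer, the hidden size, and the function names are assumptions based only on the abstract, which specifies six levels with shared weights.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # hidden size (illustrative choice)
W = rng.normal(scale=0.1, size=(d, d))   # ONE set of weights, reused at every level
b = np.zeros(d)

def encode_once(x):
    """A single encoding pass (stand-in for a recurrent/transformer layer)."""
    return np.tanh(x @ W + b)

def multi_level_encode(x, levels=6):
    """Re-encode the representation `levels` times with the SAME weights,
    mimicking re-reading the text; the abstract uses six levels."""
    for _ in range(levels):
        x = encode_once(x)               # shared W, b at every level
    return x

x = rng.normal(size=(5, d))              # 5 "tokens" of a sentence
h = multi_level_encode(x, levels=6)
print(h.shape)                           # (5, 8): one refined vector per token
```

Because every level reuses the same parameters, the model adds encoding depth without multiplying the parameter count, which is the practical appeal of weight sharing over stacking independently parameterized encoder layers.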
Pages: 2965 - 2981
Related papers (50 in total)
  • [41] A multi-level feature weight fusion model for salient object detection
    Zhang, Shanqing
    Chen, Yujie
    Meng, Yiheng
    Lu, Jianfeng
    Li, Li
    Bai, Rui
    [J]. MULTIMEDIA SYSTEMS, 2023, 29 (03) : 887 - 895
  • [43] Structural dynamic model updating based on multi-level weight coefficients
    Chen, Luyun
    Guo, Yongjin
    Li, Leixin
    [J]. APPLIED MATHEMATICAL MODELLING, 2019, 71 : 700 - 711
  • [44] Sentence modeling via multiple word embeddings and multi-level comparison for semantic textual similarity
    Nguyen Huy Tien
    Nguyen Minh Le
    Tomohiro, Yamasaki
    Tatsuya, Izuha
    [J]. INFORMATION PROCESSING & MANAGEMENT, 2019, 56 (06)
  • [45] Multi-level uncorrelated discriminative shared Gaussian process for multi-view facial expression recognition
    Kumar, Sunil
    Bhuyan, M. K.
    Iwahori, Yuji
    [J]. THE VISUAL COMPUTER, 2021, 37 (01) : 143 - 159
  • [47] Vocabulary Pyramid Network: Multi-Pass Encoding and Decoding with Multi-Level Vocabularies for Response Generation
    Liu, Cao
    He, Shizhu
    Liu, Kang
    Zhao, Jun
    [J]. 57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 3774 - 3783
  • [48] Shared leadership in project teams: An integrative multi-level conceptual model and research agenda
    Scott-Young, Christina M.
    Georgy, Maged
    Grisinger, Andrew
    [J]. INTERNATIONAL JOURNAL OF PROJECT MANAGEMENT, 2019, 37 (04) : 565 - 581
  • [49] Ladder Codes: A Class of Error-Correcting Codes with Multi-Level Shared Redundancy
    Huang, Pengfei
    Yaakobi, Eitan
    Siegel, Paul H.
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2018
  • [50] Generating random grid-based visual secret sharing with multi-level encoding
    Chao, Her Chang
    Fan, Tzuo Yau
    [J]. SIGNAL PROCESSING-IMAGE COMMUNICATION, 2017, 57 : 60 - 67