Adapting Static and Contextual Representations for Policy Gradient-Based Summarization

Cited by: 0
Authors
Lin, Ching-Sheng [1 ]
Jwo, Jung-Sing [1 ,2 ]
Lee, Cheng-Hsiung [1 ]
Affiliations
[1] Tunghai Univ, Master Program Digital Innovat, Taichung 40704, Taiwan
[2] Tunghai Univ, Dept Comp Sci, Taichung 40704, Taiwan
Keywords
automatic text summarization; GloVe; BERT; GPT; unsupervised training; policy gradient reinforcement learning;
DOI
10.3390/s23094513
CLC Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
Considering the ever-growing volume of electronic documents made available in our daily lives, the need for an efficient tool to capture their gist increases as well. Automatic text summarization, the process of shortening long text while extracting its valuable information, has been of great interest for decades. Owing to the difficulty of semantic understanding and the requirement for large amounts of training data, this research field remains challenging and worth investigating. In this paper, we propose an automated text summarization approach that adapts static and contextual representations within an extractive framework to address these research gaps. To better capture the semantics of the given text, we explore combining static embeddings from GloVe (Global Vectors) with contextual embeddings from BERT (Bidirectional Encoder Representations from Transformers)- and GPT (Generative Pre-trained Transformer)-based models. To reduce human annotation costs, we employ policy gradient reinforcement learning to perform unsupervised training. We conduct empirical studies on the public Gigaword dataset. The experimental results show that our approach achieves promising performance and is competitive with various state-of-the-art approaches.
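The abstract outlines two ingredients: fusing static (GloVe) and contextual (BERT/GPT) representations, and training an extractive selector with policy gradient reinforcement learning without reference summaries. The following minimal sketch illustrates that combination under stated assumptions only; the concatenation-based fusion, the SelectionPolicy network, the coverage_reward proxy, and the random tensors standing in for real GloVe/BERT/GPT vectors are all hypothetical choices for illustration, not the authors' implementation.

# Hedged sketch (illustrative only): REINFORCE-style extractive sentence selection
# over fused static + contextual sentence embeddings. The names fuse_embeddings,
# SelectionPolicy, and coverage_reward are hypothetical, and random tensors stand
# in for real GloVe / BERT / GPT vectors.
import torch
import torch.nn as nn

STATIC_DIM, CONTEXT_DIM = 100, 768        # e.g., GloVe-100d and BERT-base hidden size
NUM_SENTENCES = 8                          # sentences in one toy document


def fuse_embeddings(static_vecs, contextual_vecs):
    """Concatenate per-sentence static and contextual vectors (one simple fusion choice)."""
    return torch.cat([static_vecs, contextual_vecs], dim=-1)


class SelectionPolicy(nn.Module):
    """Assigns each sentence a probability of being included in the summary."""
    def __init__(self, input_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, sentence_vecs):
        return torch.sigmoid(self.net(sentence_vecs)).squeeze(-1)   # shape: (num_sentences,)


def coverage_reward(mask, sentence_vecs):
    """Unsupervised proxy reward: similarity between the mean embedding of the selected
    sentences and the mean embedding of the full document, minus a length penalty."""
    if mask.sum() == 0:
        return torch.tensor(0.0)
    summary_vec = (mask.unsqueeze(-1) * sentence_vecs).sum(0) / mask.sum()
    doc_vec = sentence_vecs.mean(0)
    sim = torch.cosine_similarity(summary_vec, doc_vec, dim=0)
    return sim - 0.05 * mask.sum() / len(mask)     # discourage selecting every sentence


# Placeholder embeddings standing in for real GloVe / BERT / GPT outputs.
static_vecs = torch.randn(NUM_SENTENCES, STATIC_DIM)
contextual_vecs = torch.randn(NUM_SENTENCES, CONTEXT_DIM)
sentence_vecs = fuse_embeddings(static_vecs, contextual_vecs)

policy = SelectionPolicy(STATIC_DIM + CONTEXT_DIM)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
baseline = 0.0                                     # running-average reward baseline

for step in range(200):
    probs = policy(sentence_vecs)
    dist = torch.distributions.Bernoulli(probs)
    mask = dist.sample()                           # 1 = sentence kept in the summary
    reward = coverage_reward(mask, sentence_vecs).detach()
    # Policy-gradient (REINFORCE) update: maximize expected reward.
    loss = -(reward - baseline) * dist.log_prob(mask).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    baseline = 0.9 * baseline + 0.1 * reward.item()

The running-average baseline subtracted from the reward is a standard variance-reduction device for REINFORCE-style training; the paper itself may use a different reward signal or baseline, so both should be read as placeholders.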
Pages: 11
Related Papers
50 records in total
  • [1] Gradient-based policy iteration: An example
    Cao, XR
    Fang, HT
    PROCEEDINGS OF THE 41ST IEEE CONFERENCE ON DECISION AND CONTROL, VOLS 1-4, 2002, : 3367 - 3371
  • [2] An analysis of gradient-based policy iteration
    Dankert, J
    Yang, L
    Si, J
    PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), VOLS 1-5, 2005, : 2977 - 2982
  • [3] Adapting lacunarity techniques for gradient-based analyses of landscape surfaces
    Hoechstetter, Sebastian
    Walz, Ulrich
    Nguyen Xuan Thinh
    ECOLOGICAL COMPLEXITY, 2011, 8 (03) : 229 - 238
  • [4] Open-Set Recognition with Gradient-Based Representations
    Lee, Jinsol
    AlRegib, Ghassan
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 469 - 473
  • [5] HASumRuNNer: An Extractive Text Summarization Optimization Model Based on a Gradient-Based Algorithm
    Muljono
    Nababan, Mangatur Rudolf
    Nugroho, Raden Arief
    Djajadinata, Kevin
    JOURNAL OF ADVANCES IN INFORMATION TECHNOLOGY, 2023, 14 (04) : 656 - 667
  • [6] Sparse Gradient-Based Direct Policy Search
    Sokolovska, Nataliya
    NEURAL INFORMATION PROCESSING, ICONIP 2012, PT IV, 2012, 7666 : 212 - 221
  • [7] Gradient-Based Aeroservoelastic Optimization with Static Output Feedback
    Stanford, Bret K.
    JOURNAL OF GUIDANCE CONTROL AND DYNAMICS, 2019, 42 (10) : 2314 - 2318
  • [8] Gradient-based stochastic extremum seeking for static maps with delays
    Yang, Xiao
    Yin, Chun
    Chang, Yuhua
    Wang, Peng
    Huang, Xuegang
    Zhong, Shouming
    PROCEEDINGS OF THE 38TH CHINESE CONTROL CONFERENCE (CCC), 2019, : 674 - 679
  • [9] Gradient-Based Algorithms With Intermediate Observations in Static and Differential Games
    Hossain, Mohammad Safayet
    Simaan, Marwan A.
    Qu, Zhihua
    IEEE ACCESS, 2025, 13 : 2694 - 2704
  • [10] A performance gradient perspective on gradient-based policy iteration and a modified value iteration
    Yang, Lei
    Dankert, James
    Si, Jennie
    INTERNATIONAL JOURNAL OF INTELLIGENT COMPUTING AND CYBERNETICS, 2008, 1 (04) : 509 - 520