Long Text Summarization and Key Information Extraction in a Multi-Task Learning Framework

Cited by: 0
Authors
Lu, Ming [1 ]
Chen, Rongfa [1 ]
Affiliations
[1] College of Management and Economy, Tianjin University, Tianjin 300072, China
Keywords
Attention mechanisms - Empirical evaluations - Key information extraction - Learning frameworks - Long text summarization - Loss functions - Multi-task learning - Text summarization - Text-based information - Training phase
DOI
10.2478/amns-2024-1659
Abstract
In the context of the rapid advancement of big data and artificial intelligence, text-based information has surged at an unprecedented rate, necessitating efficient and accurate techniques for text summarization. This paper articulates the challenges of text summarization and key information extraction, and introduces a novel model that integrates multi-task learning with an attention mechanism to improve the summarization and key-information extraction of long texts. We further establish a loss function for the model, calibrated against the discrepancy observed during the training phase. Empirical evaluations were conducted through simulated experiments after pre-processing the data with the proposed extraction model. These evaluations indicate that the model achieves optimal performance between 55 and 65 training iterations. Benchmarked against comparative models, our model demonstrates superior performance in extracting long-text summaries and key information, as evidenced by mean scores of 40.19, 16.42, and 35.48 on the Daily Mail dataset and 34.38, 16.21, and 31.38 on the Gigaword dataset. Overall, the model developed in this study proves highly effective and practical for extracting long-text summaries and key information, significantly enhancing the efficiency of processing textual data. © 2024 Ming Lu et al., published by Sciendo.
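The abstract does not give the paper's exact loss formulation, but a multi-task objective of the kind described is commonly a weighted combination of the per-task losses. The sketch below is a minimal illustration under that assumption; all names (`multi_task_loss`, `alpha`, the fixed convex weighting) are illustrative and not taken from the paper.

```python
# Illustrative sketch of a multi-task objective combining a summarization
# loss and a key-information-extraction loss. This is an assumption about
# the general form such a loss takes, NOT the paper's actual formulation.
import math

def cross_entropy(probs, target_index):
    """Negative log-likelihood of the target class under `probs`."""
    return -math.log(probs[target_index])

def multi_task_loss(sum_probs, sum_target, ext_probs, ext_target, alpha=0.5):
    """Convex combination of the two task losses.

    The paper's loss is said to be calibrated against the discrepancy
    observed during training; here we approximate that calibration with
    a fixed task weight `alpha` for simplicity.
    """
    l_summarization = cross_entropy(sum_probs, sum_target)
    l_extraction = cross_entropy(ext_probs, ext_target)
    return alpha * l_summarization + (1 - alpha) * l_extraction

# Toy predictions for one summarization token and one extraction label.
loss = multi_task_loss([0.7, 0.2, 0.1], 0, [0.1, 0.8, 0.1], 1, alpha=0.5)
```

In practice `alpha` could itself be learned or scheduled from the per-task training discrepancy, which is one plausible reading of the calibration the abstract mentions.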
Related Papers
50 results (showing 21-30)
  • [21] Multi-Task Learning for Cross-Lingual Abstractive Summarization
    Takase, Sho
    Okazaki, Naoaki
    [J]. 2022 Language Resources and Evaluation Conference, LREC 2022, 2022, : 3008 - 3016
  • [22] TASK AWARE MULTI-TASK LEARNING FOR SPEECH TO TEXT TASKS
    Indurthi, Sathish
    Zaidi, Mohd Abbas
    Lakumarapu, Nikhil Kumar
    Lee, Beomseok
    Han, Hyojung
    Ahn, Seokchan
    Kim, Sangha
    Kim, Chanwoo
    Hwang, Inchul
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7723 - 7727
  • [24] Prototype Feature Extraction for Multi-task Learning
    Xin, Shen
    Jiao, Yuhang
    Long, Cheng
    Wang, Yuguang
    Wang, Xiaowei
    Yang, Sen
    Liu, Ji
    Zhang, Jie
    [J]. PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22), 2022, : 2472 - 2481
  • [25] Multi-task learning framework for echocardiography segmentation
    Monkam, Patrice
    Jin, Songbai
    Lu, Wenkai
    [J]. 2022 IEEE INTERNATIONAL ULTRASONICS SYMPOSIUM (IEEE IUS), 2022,
  • [26] MCapsNet: Capsule Network for Text with Multi-Task Learning
    Xiao, Liqiang
    Zhang, Honglun
    Chen, Wenqing
    Wang, Yongkun
    Jin, Yaohui
    [J]. 2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018, : 4565 - 4574
  • [27] Adaptive multi-task learning for speech to text translation
    Feng, Xin
    Zhao, Yue
    Zong, Wei
    Xu, Xiaona
    [J]. EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 2024, 2024 (01):
  • [28] Enhancing Text2SQL Generation with Syntactic Information and Multi-task Learning
    Li, Haochen
    Nuo, Minghua
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT III, 2023, 14256 : 377 - 388
  • [29] Retaining Privileged Information for Multi-Task Learning
    Tang, Fengyi
    Xiao, Cao
    Wang, Fei
    Zhou, Jiayu
    Lehman, Li-wei H.
    [J]. KDD'19: PROCEEDINGS OF THE 25TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2019, : 1369 - 1377
  • [30] Multi-task Learning by Leveraging the Semantic Information
    Zhou, Fan
    Chaib-draa, Brahim
    Wang, Boyu
    [J]. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 11088 - 11096