Transformer-based highlights extraction from scientific papers

Cited by: 6
Authors
La Quatra, Moreno [1 ]
Cagliero, Luca [1 ]
Affiliations
[1] Politecn Torino, Dipartimento Automat & Informat, Corso Duca Abruzzi 24, I-10129 Turin, Italy
Keywords
Highlights extraction; Transformer model; Extractive summarization;
DOI
10.1016/j.knosys.2022.109382
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Highlights are short sentences used to annotate scientific papers. They complement the abstract content by conveying the main research findings. To automate the process of paper annotation, highlights extraction aims at selecting three to five salient sentences per paper via supervised learning. Existing approaches rely on ad hoc linguistic features, which depend on the analyzed context, and apply recurrent neural networks, which are not effective in learning long-range text dependencies. This paper leverages the attention mechanism adopted in transformer models to improve the accuracy of sentence relevance estimation. Unlike existing approaches, it relies on the end-to-end training of a deep regression model. To attend to patterns relevant to highlights content, it also enriches sentence encodings with a section-level contextualization. The experimental results, achieved on three different benchmark datasets, show that the designed architecture achieves significant performance improvements over the state-of-the-art. (c) 2022 Published by Elsevier B.V.
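The pipeline described in the abstract (sentence encodings enriched with section-level context, scored by a regression head, top sentences selected as highlights) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, the embeddings are treated as precomputed and frozen, and the regression head is a plain linear layer, whereas the paper trains a deep transformer-based regressor end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

def contextualize(sent_emb, section_emb):
    # Enrich each sentence vector with its section-level context
    # by concatenating the (shared) section embedding to every row.
    section = np.broadcast_to(section_emb, sent_emb.shape)
    return np.concatenate([sent_emb, section], axis=1)

def relevance_scores(features, w, b):
    # Regression head: one scalar relevance score per sentence.
    return features @ w + b

def extract_highlights(sent_emb, section_emb, w, b, k=3):
    # Score all sentences and return the indices of the top-k,
    # i.e. the candidate highlights (the paper selects 3 to 5).
    feats = contextualize(sent_emb, section_emb)
    scores = relevance_scores(feats, w, b)
    return np.argsort(scores)[::-1][:k]

# Demo with random stand-ins for transformer sentence embeddings.
sent_emb = rng.normal(size=(10, 8))          # 10 sentences, dim 8
section_emb = sent_emb.mean(axis=0)          # crude section context
w = rng.normal(size=16)                      # head weights (dim 8 + 8)
highlight_idx = extract_highlights(sent_emb, section_emb, w, b=0.0, k=3)
```

In the paper's setting the relevance targets for training would come from similarity between candidate sentences and gold highlights, and the encoder itself is fine-tuned rather than frozen.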
Pages: 9
Related Papers
50 results
  • [21] Swin Transformer-Based Multiscale Attention Model for Landslide Extraction From Large-Scale Area
    Gao, Mengjie
    Chen, Fang
    Wang, Lei
    Zhao, Huichen
    Yu, Bo
    [J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62
  • [22] Contextualized medication information extraction using Transformer-based deep learning architectures
    Chen, Aokun
    Yu, Zehao
    Yang, Xi
    Guo, Yi
    Bian, Jiang
    Wu, Yonghui
    [J]. JOURNAL OF BIOMEDICAL INFORMATICS, 2023, 142
  • [23] Reliable object tracking by multimodal hybrid feature extraction and transformer-based fusion
    Sun, Hongze
    Liu, Rui
    Cai, Wuque
    Wang, Jun
    Wang, Yue
    Tang, Huajin
    Cui, Yan
    Yao, Dezhong
    Guo, Daqing
    [J]. NEURAL NETWORKS, 2024, 178
  • [24] Leveraging Semantic Text Analysis to Improve the Performance of Transformer-Based Relation Extraction
    Evans, Marie-Therese Charlotte
    Latifi, Majid
    Ahsan, Mominul
    Haider, Julfikar
    [J]. INFORMATION, 2024, 15 (02)
  • [25] Integration and sharing of scientific papers based on automatic metadata extraction
    Cui, Binge
    [J]. Journal of Computational Information Systems, 2010, 6 (02): 411 - 418
  • [26] Two-stage contextual transformer-based convolutional neural network for airway extraction from CT images
    Wu, Yanan
    Zhao, Shuiqing
    Qi, Shouliang
    Feng, Jie
    Pang, Haowen
    Chang, Runsheng
    Bai, Long
    Li, Mengqi
    Xia, Shuyue
    Qian, Wei
    Ren, Hongliang
    [J]. ARTIFICIAL INTELLIGENCE IN MEDICINE, 2023, 143
  • [27] Predicting Discourse Trees from Transformer-based Neural Summarizers
    Xiao, Wen
    Huber, Patrick
    Carenini, Giuseppe
    [J]. 2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 4139 - 4152
  • [28] 3D Dental Biometrics: Transformer-based Dental Arch Extraction and Matching
    Zhang, Zhiyuan
    Zhong, Xin
    [J]. 2023 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI, 2023, : 139 - 140
  • [29] TP-DDI: Transformer-based pipeline for the extraction of Drug-Drug Interactions
    Zaikis, Dimitrios
    Vlahavas, Ioannis
    [J]. ARTIFICIAL INTELLIGENCE IN MEDICINE, 2021, 119
  • [30] Transformer-Based Approach to Melanoma Detection
    Cirrincione, Giansalvo
    Cannata, Sergio
    Cicceri, Giovanni
    Prinzi, Francesco
    Currieri, Tiziana
    Lovino, Marta
    Militello, Carmelo
    Pasero, Eros
    Vitabile, Salvatore
    [J]. SENSORS, 2023, 23 (12)