Adapting Pretrained Text-to-Text Models for Long Text Sequences

Cited by: 0
Authors
Xiong, Wenhan [1]
Gupta, Anchit [1]
Toshniwal, Shubham [1]
Mehdad, Yashar [1]
Yih, Wen-tau [1]
Affiliations
[1] Meta AI, Menlo Park, CA 94025 USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs. Through a comprehensive study along three axes of the pretraining pipeline - model architecture, optimization objective, and pretraining corpus - we propose an effective recipe to build long-context models from existing short-context models. Specifically, we replace the full attention in transformers with pooling-augmented blockwise attention, and pretrain the model with a masked-span prediction task with spans of varying lengths. In terms of the pretraining corpus, we find that using randomly concatenated short documents from a large open-domain corpus results in better performance than using existing long-document corpora, which are typically limited in their domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes the new state of the art on five long-text summarization datasets, often outperforming previous methods with larger model sizes.
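As a rough illustration of the pooling-augmented blockwise attention described in the abstract, the sketch below lets each query block attend to its own block plus one pooled summary per block. This is a minimal sketch, not the authors' implementation: the block size, mean pooling, single-head formulation, and the function name pooled_block_attention are assumptions made here for brevity.

```python
# Minimal sketch of pooling-augmented blockwise attention (illustrative only,
# not the paper's exact method). Each block of queries attends to (a) the keys
# and values in its own block and (b) one mean-pooled key/value per block,
# giving every token a coarse view of the whole sequence at near-linear cost.
import torch
import torch.nn.functional as F


def pooled_block_attention(q, k, v, block_size=64):
    """Single-head attention over [batch, seq, dim] tensors."""
    bsz, seq_len, dim = q.shape
    assert seq_len % block_size == 0, "pad the sequence to a multiple of block_size"
    n_blocks = seq_len // block_size

    # Reshape into blocks: [bsz, n_blocks, block_size, dim]
    qb = q.view(bsz, n_blocks, block_size, dim)
    kb = k.view(bsz, n_blocks, block_size, dim)
    vb = v.view(bsz, n_blocks, block_size, dim)

    # One pooled key/value summary per block: [bsz, n_blocks, dim]
    k_pool = kb.mean(dim=2)
    v_pool = vb.mean(dim=2)

    # Local keys/values augmented with the pooled summaries (shared by every block):
    # [bsz, n_blocks, block_size + n_blocks, dim]
    k_aug = torch.cat([kb, k_pool.unsqueeze(1).expand(-1, n_blocks, -1, -1)], dim=2)
    v_aug = torch.cat([vb, v_pool.unsqueeze(1).expand(-1, n_blocks, -1, -1)], dim=2)

    # Scaled dot-product attention within each (augmented) block.
    scores = torch.einsum("bnqd,bnkd->bnqk", qb, k_aug) / dim ** 0.5
    probs = F.softmax(scores, dim=-1)
    out = torch.einsum("bnqk,bnkd->bnqd", probs, v_aug)
    return out.reshape(bsz, seq_len, dim)


if __name__ == "__main__":
    x = torch.randn(2, 256, 32)           # toy inputs standing in for projected q/k/v
    y = pooled_block_attention(x, x, x)   # -> [2, 256, 32]
    print(y.shape)
```

The pooled per-block summaries are what distinguish this from plain local attention: they propagate coarse global context to every block while keeping the attention cost close to linear in sequence length.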
Pages: 5566-5578
Page count: 13
Related Papers (50 total)
  • [1] Leveraging Text-to-Text Pretrained Language Models for Question Answering in Chemistry
    Tran, Dan
    Pascazio, Laura
    Akroyd, Jethro
    Mosbach, Sebastian
    Kraft, Markus
    ACS OMEGA, 2024, 9(12): 13883-13896
  • [2] ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation
    Long Phan
    Hieu Tran
    Hieu Nguyen
    Trinh, Trieu H.
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES: PROCEEDINGS OF THE STUDENT RESEARCH WORKSHOP, 2022: 136-142
  • [3] mT6: Multilingual Pretrained Text-to-Text Transformer with Translation Pairs
    Chi, Zewen
    Dong, Li
    Ma, Shuming
    Huang, Shaohan
    Mao, Xian-Ling
    Huang, Heyan
    Wei, Furu
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021: 1671-1683
  • [4] Adapting Pretrained Representations for Text Mining
    Meng, Yu
    Huang, Jiaxin
    Zhang, Yu
    Han, Jiawei
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022: 4806-4807
  • [5] Distractor Generation Through Text-to-Text Transformer Models
    de-Fitero-Dominguez, David
    Garcia-Lopez, Eva
    Garcia-Cabot, Antonio
    del-Hoyo-Gabaldon, Jesus-Angel
    Moreno-Cediel, Antonio
    IEEE ACCESS, 2024, 12: 25580-25589
  • [6] Assessing the Stability of Text-to-Text Models for Keyword Generation Tasks
    Walkowiak, Tomasz
    COMPUTATIONAL SCIENCE, ICCS 2024, PT III, 2024, 14834: 112-119
  • [7] Progressive Generation of Long Text with Pretrained Language Models
    Tan, Bowen
    Yang, Zichao
    Al-Shedivat, Maruan
    Xing, Eric P.
    Hu, Zhiting
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021: 4313-4324
  • [8] Text-to-Text Generative Adversarial Networks
    Li, Changliang
    Su, Yixin
    Liu, Wenju
    2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018
  • [9] mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences
    Uthus, David
    Ontanon, Santiago
    Ainslie, Joshua
    Guo, Mandy
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023: 9380-9386
  • [10] Text-To-Text Generation for Issue Report Classification
    Rejithkumar, Gokul
    Anish, Preethu Rose
    Ghaisas, Smita
    PROCEEDINGS 2024 ACM/IEEE INTERNATIONAL WORKSHOP ON NL-BASED SOFTWARE ENGINEERING, NLBSE 2024, 2024: 53-56