Copy Mechanism and Tailored Training for Character-Based Data-to-Text Generation

Cited by: 1
Authors
Roberti, Marco [1 ]
Bonetta, Giovanni [1 ]
Cancelliere, Rossella [1 ]
Gallinari, Patrick [2 ,3 ]
Affiliations
[1] Univ Turin, Comp Sci Dept, Via Pessinetto 12, I-12149 Turin, Italy
[2] Sorbonne Univ, 4 Pl Jussieu, F-75005 Paris, France
[3] Criteo AI Lab, 32 Rue Blanche, F-75009 Paris, France
Keywords
Natural language processing; Data-to-text generation; Deep learning; Sequence-to-sequence; Dataset;
DOI
10.1007/978-3-030-46147-8_39
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the last few years, many different methods have focused on using deep recurrent neural networks for natural language generation. The most widely used sequence-to-sequence neural methods are word-based: as such, they need a pre-processing step called delexicalization (and its inverse, relexicalization) to deal with uncommon or unknown words. These forms of processing, however, give rise to models that depend on the vocabulary used and are not completely neural. In this work, we present an end-to-end sequence-to-sequence model with an attention mechanism which reads and generates at the character level, no longer requiring delexicalization, tokenization, or even lowercasing. Moreover, since characters constitute the common "building blocks" of every text, it also allows a more general approach to text generation, making it possible to exploit transfer learning during training. These capabilities stem from two major features: (i) the ability to alternate between the standard generation mechanism and a copy mechanism, which allows input facts to be copied directly into the output, and (ii) the use of an original training pipeline that further improves the quality of the generated texts. We also introduce a new dataset called E2E+, a modified version of the well-known E2E dataset used in the E2E Challenge, designed to highlight the copying capabilities of character-based models. We tested our model according to five broadly accepted metrics (including the widely used BLEU), showing that it yields competitive performance with respect to both character-based and word-based approaches.
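The abstract describes alternating between a standard generation mechanism and a copy mechanism. The paper's exact gating network is not reproduced here; as an illustration only, the sketch below shows the standard pointer-generator-style blending (in the spirit of See et al.), where a soft gate `p_gen` mixes a generation distribution over the character vocabulary with a copy distribution obtained by scattering attention weights over input characters. All names and the toy numbers are hypothetical.

```python
import numpy as np

def mix_generate_and_copy(vocab_dist, copy_dist, p_gen):
    """Blend a generation distribution over the character vocabulary with a
    copy distribution (attention mass mapped onto the same vocabulary ids).
    p_gen is the soft probability of generating rather than copying;
    both inputs are probability vectors, so the result is one too."""
    return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist

# Toy vocabulary of 4 characters (hypothetical values).
vocab_dist = np.array([0.7, 0.1, 0.1, 0.1])  # decoder softmax output
copy_dist  = np.array([0.0, 0.9, 0.1, 0.0])  # attention over input characters
mixed = mix_generate_and_copy(vocab_dist, copy_dist, p_gen=0.4)
```

With a low `p_gen` (0.4 here), the copy distribution dominates, so a character that the attention strongly points at in the input (index 1) ends up most probable in the output, which is the behavior the copy mechanism is designed to provide for rare names appearing verbatim in the input facts.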
Pages: 648 - 664
Page count: 17