Bridging the Structural Gap Between Encoding and Decoding for Data-to-Text Generation

Cited by: 0
Authors
Zhao, Chao [1 ]
Walker, Marilyn [2 ]
Chaturvedi, Snigdha [1 ]
Affiliations
[1] Univ North Carolina Chapel Hill, Dept Comp Sci, Chapel Hill, NC 27599 USA
[2] Univ Calif Santa Cruz, Nat Language & Dialog Syst Lab, Santa Cruz, CA 95064 USA
Keywords: (none listed)
DOI: (none)
CLC classification: TP18 [Artificial intelligence theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Generating sequential natural language descriptions from graph-structured data (e.g., knowledge graphs) is challenging, partly because of the structural differences between the input graph and the output text. Hence, popular sequence-to-sequence models, which require serialized input, are not a natural fit for this task. Graph neural networks, on the other hand, can better encode the input graph but broaden the structural gap between the encoder and decoder, making faithful generation difficult. To narrow this gap, we propose DUALENC, a dual-encoding model that can not only incorporate the graph structure but also cater to the linear structure of the output text. Empirical comparisons with strong single-encoder baselines demonstrate that dual encoding significantly improves the quality of the generated text.
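The abstract does not specify DUALENC's architecture, but the core dual-encoding idea (a graph encoder over the input structure plus a sequence encoder over a linearization of it, with both representations handed to the decoder) can be illustrated with a minimal, stdlib-only Python sketch. Everything here is an assumption for illustration: `embed` is a deterministic toy embedding, `graph_encode` is one-hop mean-pooling message passing standing in for a GNN, `sequence_encode` is a running average standing in for an RNN, and all function names are hypothetical, not the paper's API.

```python
from collections import defaultdict


def embed(token, dim=4):
    # Deterministic toy embedding (stand-in for learned vectors).
    s = sum(ord(c) for c in token)
    return [((s * (i + 3)) % 97) / 97.0 for i in range(dim)]


def mean(vectors):
    # Element-wise mean of equal-length vectors.
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]


def graph_encode(triples, rounds=2):
    # Graph encoder: mean-pooling message passing over the entity graph,
    # a crude stand-in for a graph neural network.
    nbrs = defaultdict(set)
    for s, _, o in triples:
        nbrs[s].add(o)
        nbrs[o].add(s)
    h = {n: embed(n) for n in nbrs}
    for _ in range(rounds):
        h = {n: mean([h[n]] + [h[m] for m in nbrs[n]]) for n in nbrs}
    return h


def sequence_encode(triples):
    # Sequence encoder: serialize the triples and fold token embeddings
    # in order (running average as a toy recurrent state).
    tokens = [t for triple in triples for t in triple]
    state = embed(tokens[0])
    for tok in tokens[1:]:
        state = mean([state, embed(tok)])
    return state


def dual_context(triples):
    # Dual encoding: concatenate the pooled graph representation with the
    # sequence representation as the decoder's input context.
    graph_vec = mean(list(graph_encode(triples).values()))
    seq_vec = sequence_encode(triples)
    return graph_vec + seq_vec


triples = [("Alan_Turing", "born_in", "London"),
           ("London", "capital_of", "United_Kingdom")]
ctx = dual_context(triples)  # 8-dim context: 4 graph dims + 4 sequence dims
```

The point of the sketch is only the shape of the computation: the graph encoder respects the input's relational structure, the sequence encoder respects the output's linear structure, and the decoder conditions on both.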
Pages: 2481-2491 (11 pages)