Controllable Generation from Pre-trained Language Models via Inverse Prompting

Cited by: 16
Authors
Zou, Xu [1 ,2 ]
Yin, Da [1 ,2 ]
Zhong, Qingyang [1 ,2 ]
Yang, Hongxia [4 ]
Yang, Zhilin [2 ,3 ]
Tang, Jie [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China
[2] Beijing Acad Artificial Intelligence, Beijing, Peoples R China
[3] Recurrent AI Ltd, Beijing, Peoples R China
[4] Alibaba Inc, Hangzhou, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Language Modeling; Machine Question Answering; Poem Generation; Controllable Generation; Beam Search;
DOI
10.1145/3447548.3467418
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large-scale pre-trained language models have demonstrated strong capabilities for generating realistic text. However, it remains challenging to control the generation results. Previous approaches such as prompting are far from sufficient, and this lack of controllability limits the usage of language models. To tackle this challenge, we propose an innovative method, inverse prompting, to better control text generation. The core idea of inverse prompting is to use the generated text to inversely predict the prompt during beam search, which enhances the relevance between the prompt and the generated text and thus improves controllability. Empirically, we pre-train a large-scale Chinese language model to perform a systematic study using human evaluation on the tasks of open-domain poem generation and open-domain long-form question answering. Results demonstrate that our proposed method substantially outperforms the baselines and that our generation quality is close to human performance on some of the tasks.
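The core idea described in the abstract can be sketched in a few lines: during beam search, candidate continuations are ranked not only by forward likelihood but by how well the generated text "inversely" predicts the original prompt. The following is a minimal toy sketch of that re-ranking step, not the authors' released implementation; `inverse_score`, `beam_step`, and the word-overlap stand-in for a language-model likelihood are all illustrative assumptions.

```python
def inverse_score(prompt, candidate, log_likelihood):
    """Score a candidate continuation by the likelihood of the prompt
    conditioned on the generated text (the inverse direction)."""
    return log_likelihood(text=prompt, context=candidate)

def beam_step(prompt, candidates, log_likelihood, beam_width=2):
    """Keep the beam_width candidates whose text best predicts the prompt."""
    ranked = sorted(
        candidates,
        key=lambda c: inverse_score(prompt, c, log_likelihood),
        reverse=True,
    )
    return ranked[:beam_width]

if __name__ == "__main__":
    # Toy stand-in for a real LM log-likelihood: word overlap with the prompt.
    def toy_ll(text, context):
        return len(set(text.split()) & set(context.split()))

    prompt = "a poem about the autumn moon"
    candidates = [
        "the moon rises over autumn fields",
        "stock prices fell sharply today",
        "silver autumn moon above the river",
    ]
    beam = beam_step(prompt, candidates, toy_ll, beam_width=2)
    print(beam)  # the off-topic candidate is pruned from the beam
```

In the paper's actual setting, the scoring model is the pre-trained language model itself, evaluating the log-likelihood of the prompt given the candidate; the toy overlap function here only mimics that relevance signal.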
Pages: 2450-2460
Page count: 11