Controllable Generation from Pre-trained Language Models via Inverse Prompting

Cited by: 16
Authors
Zou, Xu [1 ,2 ]
Yin, Da [1 ,2 ]
Zhong, Qingyang [1 ,2 ]
Yang, Hongxia [4 ]
Yang, Zhilin [2 ,3 ]
Tang, Jie [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China
[2] Beijing Acad Artificial Intelligence, Beijing, Peoples R China
[3] Recurrent AI Ltd, Beijing, Peoples R China
[4] Alibaba Inc, Hangzhou, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Language Modeling; Machine Question Answering; Poem Generation; Controllable Generation; Beam Search;
DOI
10.1145/3447548.3467418
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Large-scale pre-trained language models have demonstrated strong capabilities of generating realistic texts. However, it remains challenging to control the generation results. Previous approaches such as prompting are far from sufficient, and the lack of controllability limits the usage of language models. To tackle this challenge, we propose an innovative method, inverse prompting, to better control text generation. The core idea of inverse prompting is to use generated text to inversely predict the prompt during beam search, which enhances the relevance between the prompt and the generated text and thus improves controllability. Empirically, we pre-train a large-scale Chinese language model to perform a systematic study using human evaluation on the tasks of open-domain poem generation and open-domain long-form question answering. Results demonstrate that our proposed method substantially outperforms the baselines and that our generation quality is close to human performance on some of the tasks.
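The sketch below illustrates the re-scoring idea described in the abstract: each beam-search candidate is scored not only by how likely it is given the prompt, but also by how well the candidate "inversely" predicts the prompt. This is a minimal illustration, not the paper's implementation; the `log_likelihood` helper, the additive combination of the two scores, and the weight `lam` are assumptions made for the example.

```python
# Minimal sketch of inverse-prompting beam re-scoring.
# Assumption: `log_likelihood(condition, target)` is any user-supplied scorer
# returning the language model's log-probability of `target` given `condition`.
from typing import Callable, List, Tuple


def inverse_prompting_score(
    prompt: str,
    candidate: str,
    log_likelihood: Callable[[str, str], float],
    lam: float = 1.0,  # illustrative weight on the inverse term
) -> float:
    """Forward score p(candidate | prompt) plus weighted inverse score p(prompt | candidate)."""
    forward = log_likelihood(prompt, candidate)
    inverse = log_likelihood(candidate, prompt)
    return forward + lam * inverse


def rerank_beam(
    prompt: str,
    beam: List[str],
    log_likelihood: Callable[[str, str], float],
    lam: float = 1.0,
) -> List[Tuple[str, float]]:
    """Re-rank beam-search candidates by the combined forward + inverse score."""
    scored = [(c, inverse_prompting_score(prompt, c, log_likelihood, lam)) for c in beam]
    return sorted(scored, key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    # Toy stand-in scorer: counts shared words. A real system would query the LM.
    def toy_log_likelihood(condition: str, target: str) -> float:
        shared = set(condition.lower().split()) & set(target.lower().split())
        return float(len(shared))

    prompt = "write a poem about the autumn moon"
    beam = [
        "the autumn moon rises over quiet water",
        "stock prices fell sharply on Monday",
    ]
    for cand, score in rerank_beam(prompt, beam, toy_log_likelihood):
        print(f"{score:5.1f}  {cand}")
```

In this toy setup the on-topic candidate scores higher because it both follows from the prompt and makes the prompt easy to recover, which is the intuition behind using the inverse score to enforce relevance.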
Pages: 2450-2460
Page count: 11
Related Papers (50 in total)
  • [1] Soliman, Ahmed; Shaheen, Samir; Hadhoud, Mayada. Leveraging pre-trained language models for code generation. COMPLEX & INTELLIGENT SYSTEMS, 2024, 10(03): 3955-3980.
  • [2] Li, Junyi; Tang, Tianyi; Zhao, Wayne Xin; Nie, Jian-Yun; Wen, Ji-Rong. Pre-Trained Language Models for Text Generation: A Survey. ACM COMPUTING SURVEYS, 2024, 56(09).
  • [3] Zhang, Hanqing; Song, Haolin; Li, Shaoyu; Zhou, Ming; Song, Dawei. A Survey of Controllable Text Generation Using Transformer-based Pre-trained Language Models. ACM COMPUTING SURVEYS, 2024, 56(03).
  • [4] Khanehzar, Shima; Cohn, Trevor; Mikolajczak, Gosia; Frermann, Lea. Probing Power by Prompting: Harnessing Pre-trained Language Models for Power Connotation Framing. 17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023: 873-885.
  • [5] Yang, Sen; Feng, Dawei; Qiao, Linbo; Kan, Zhigang; Li, Dongsheng. Exploring Pre-trained Language Models for Event Extraction and Generation. 57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019: 5284-5294.
  • [6] Yang, Ze; Wu, Wei; Xu, Can; Liang, Xinnian; Bai, Jiaqi; Wang, Liran; Wang, Wei; Li, Zhoujun. STYLEDGPT: Stylized Response Generation with Pre-trained Language Models. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020: 1548-1559.
  • [7] Yu, Dian; Yu, Zhou; Sagae, Kenji. Attribute Alignment: Controlling Text Generation from Pre-trained Language Models. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021: 2251-2268.
  • [8] Tao, Chaofan; Hou, Lu; Zhang, Wei; Shang, Lifeng; Jiang, Xin; Liu, Qun; Luo, Ping; Wong, Ngai. Compression of Generative Pre-trained Language Models via Quantization. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022: 4821-4836.
  • [9] Sun, Zewei; Wang, Mingxuan; Li, Lei. Multilingual Translation via Grafting Pre-trained Language Models. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021: 2735-2747.
  • [10] DiDi Labs. Parallel Corpus Filtering via Pre-trained Language Models. arXiv, 2020.