Memory-enhanced text style transfer with dynamic style learning and calibration

Cited by: 0

Authors
Fuqiang LIN [1]
Yiping SONG [1]
Zhiliang TIAN [2]
Wangqun CHEN [1]
Diwen DONG [1]
Bo LIU [1,3]
Affiliations
[1] College of Computer Science and Technology, National University of Defense Technology
[2] Department of Computer Science and Engineering, The Hong Kong University of Science and Technology
[3] Strategic Assessments and Consultation Institute, Academy of Military Sciences
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
CLC classification number
TP391.1 [Text Information Processing]
Subject classification codes
081203; 0835
Abstract
Text style transfer aims to rephrase a sentence to match the desired style while retaining the original content. Treating it as a controllable text generation task, mainstream approaches use a content-independent style embedding as a control variable to guide stylistic generation. Nonetheless, stylistic properties are context-sensitive even under the same style. For example, “delicious” and “helpful” both convey positive sentiment, although they are more likely to describe food and people, respectively. Therefore, the desired style signal must vary with the content. To this end, we propose a memory-enhanced transfer method, which learns fine-grained style representations conditioned on content to assist transfer. Rather than employing static style embeddings or latent variables, our method abstracts linguistic characteristics from training corpora and memorizes subdivided content with the corresponding style representations. The style signal is dynamically retrieved from memory using the content as a query, providing a more expressive and flexible latent style space. To address the imbalance between quantity and quality across different content, we further introduce a calibration method that augments memory construction by modeling the relationships between candidate styles. Experimental results obtained on three benchmark datasets confirm the superior performance of our model compared to competitive approaches. The evaluation metrics and a case study also indicate that our model can generate diverse stylistic phrases matching the context.
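As a concrete illustration of the retrieval step described in the abstract, the following minimal sketch (PyTorch; the module name, slot count, and dimensions are illustrative assumptions, not the authors' released implementation) shows how a content encoding can be used as a query over a learned key-value memory to produce a content-dependent style vector in place of a single static style embedding.

# Minimal sketch, assuming a learned key-value memory queried by the
# content encoding; names and sizes are hypothetical.
import torch
import torch.nn as nn

class StyleMemory(nn.Module):
    def __init__(self, num_slots: int = 256, dim: int = 512):
        super().__init__()
        # Keys summarize subdivided content; values hold the corresponding
        # fine-grained style representations abstracted from the corpus.
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, dim) * 0.02)

    def forward(self, content: torch.Tensor) -> torch.Tensor:
        # content: (batch, dim) encoding of the input sentence's content.
        scores = content @ self.keys.t() / self.keys.size(-1) ** 0.5
        weights = torch.softmax(scores, dim=-1)   # (batch, num_slots)
        style = weights @ self.values             # (batch, dim)
        return style                              # dynamic style signal

memory = StyleMemory()
content_vec = torch.randn(4, 512)
style_vec = memory(content_vec)                   # shape (4, 512)

In such a design, the retrieved style vector would condition the decoder during generation; the calibration step described in the abstract would additionally adjust how memory slots are constructed and is not shown here.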
Pages: 181-196
Number of pages: 16