Counterfactuals to Control Latent Disentangled Text Representations for Style Transfer

Cited by: 0
Authors
Nangi, Sharmila Reddy [1 ]
Chhaya, Niyati [1 ]
Khosla, Sopan [1 ,2 ]
Kaushik, Nikhil [1 ,3 ]
Nyati, Harshit [1 ,4 ]
Affiliations
[1] Adobe Research, Kharagpur, West Bengal, India
[2] Carnegie Mellon University, Pittsburgh, PA 15213, USA
[3] Cohesity Storage Solutions, Bengaluru, India
[4] Adobe Systems, Noida, India
Keywords
DOI: not available
CLC number: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Disentangling latent representations into content and style spaces is a commonly employed approach for unsupervised text style transfer. These techniques aim to learn disentangled representations and then tweak them to modify the style of a sentence. In this paper, we propose a counterfactual-based method that modifies the latent representation by posing a 'what-if' scenario. This simple and disciplined approach also enables fine-grained control over the transfer strength. We conduct experiments with the proposed methodology on multiple attribute transfer tasks, such as sentiment, formality, and excitement, to support our hypothesis.
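The abstract does not spell out how the 'what-if' edit is realized, so the snippet below is only a minimal, hypothetical sketch of one common way to implement counterfactual edits on a disentangled style latent: take gradient steps on the style half of the representation until a pretrained style classifier predicts the target attribute, while leaving the content half untouched. The number of steps and the step size then act as the knob for transfer strength. All class and function names (StyleClassifier, counterfactual_edit) are illustrative and are not taken from the paper.

# Minimal PyTorch sketch (not the authors' code) of a counterfactual edit on a
# disentangled style latent: "what minimal change to z_style would make the
# style classifier predict the target style?"
import torch
import torch.nn as nn

class StyleClassifier(nn.Module):
    """Toy classifier over the style latent; stands in for a trained one."""
    def __init__(self, style_dim: int, num_styles: int = 2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(style_dim, 64), nn.ReLU(),
                                 nn.Linear(64, num_styles))

    def forward(self, z_style):
        return self.net(z_style)

def counterfactual_edit(z_style, classifier, target_label, steps=10, lr=0.1):
    """Gradient-based 'what-if' edit of the style latent toward target_label.

    Fewer steps / a smaller lr give a weaker transfer; more steps give a
    stronger one, which is where the fine-grained control comes from.
    """
    z = z_style.clone().detach().requires_grad_(True)
    target = torch.tensor([target_label])
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(classifier(z), target)
        grad, = torch.autograd.grad(loss, z)
        z = (z - lr * grad).detach().requires_grad_(True)
    return z.detach()

if __name__ == "__main__":
    clf = StyleClassifier(style_dim=16)        # assume this was trained on style labels
    z_content = torch.randn(1, 32)             # content latent: left untouched
    z_style = torch.randn(1, 16)               # style latent: edited counterfactually
    z_style_cf = counterfactual_edit(z_style, clf, target_label=1, steps=20, lr=0.1)
    z_new = torch.cat([z_content, z_style_cf], dim=-1)  # would be fed to the decoder
    print(z_new.shape)                         # torch.Size([1, 48])

In a full system, the edited style latent would be concatenated with the unchanged content latent and decoded back into text to produce the style-transferred sentence.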
Pages: 40-48 (9 pages)