Counterfactuals to Control Latent Disentangled Text Representations for Style Transfer

Cited by: 0
Authors
Nangi, Sharmila Reddy [1 ]
Chhaya, Niyati [1 ]
Khosla, Sopan [1 ,2 ]
Kaushik, Nikhil [1 ,3 ]
Nyati, Harshit [1 ,4 ]
Affiliations
[1] Adobe Res, Kharagpur, W Bengal, India
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Cohes Storage Solut, Bengaluru, India
[4] Adobe Syst, Noida, India
Keywords
DOI: not available
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Disentanglement of latent representations into content and style spaces is a commonly employed method for unsupervised text style transfer. These techniques aim to learn disentangled representations and tweak them to modify the style of a sentence. In this paper, we propose a counterfactual-based method to modify the latent representation by posing a 'what-if' scenario. This simple and disciplined approach also enables fine-grained control over the transfer strength. We conduct experiments with the proposed methodology on multiple attribute transfer tasks, such as sentiment, formality, and excitement, to support our hypothesis.
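The abstract's core idea — counterfactually editing a disentangled style latent, with a strength knob — can be sketched in a minimal form. This is not the paper's actual method: it assumes a linear style classifier over the style latent (all names, shapes, and the closed-form edit are illustrative assumptions), and applies the minimal L2 perturbation that moves the classifier's logit to a target margin, scaled by a strength factor.

```python
import numpy as np

def counterfactual_style_edit(z_style, w, b, target=1, strength=1.0, margin=1.0):
    """Hypothetical sketch: minimally perturb a style latent so that a
    linear style classifier sigmoid(w @ z + b) predicts `target`.

    The minimal-norm edit that moves the logit to +margin (target=1) or
    -margin (target=0) is a step along the classifier normal w; scaling
    that step by `strength` in [0, 1] gives partial style transfer.
    """
    sign = 1.0 if target == 1 else -1.0
    logit = w @ z_style + b
    needed = sign * margin - logit        # distance still to travel, in logit units
    delta = needed * w / (w @ w)          # minimal L2 edit achieving that logit
    return z_style + strength * delta

# Toy example: 4-d style latent and made-up classifier weights.
w = np.array([1.0, -2.0, 0.5, 0.0])
b = 0.0
z = np.array([-1.0, 0.5, 0.0, 0.3])      # current logit: -2.0 (source style)
z_cf = counterfactual_style_edit(z, w, b, target=1, strength=1.0)
print(w @ z_cf + b)                       # logit is now at the target margin (≈ 1.0)
```

With `strength=0.5` the latent moves only halfway toward the counterfactual, which is one simple way to realize the fine-grained transfer-strength control the abstract describes; the content latent is left untouched throughout.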
Pages: 40-48 (9 pages)
Related Papers (50 total)
  • [21] Orthogonality-Enforced Latent Space in Autoencoders: An Approach to Learning Disentangled Representations
    Cha, Jaehoon
    Thiyagalingam, Jeyan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023, 202
  • [22] Robust unsupervised image categorization based on variational autoencoder with disentangled latent representations
    Yang, Lin
    Fan, Wentao
    Bouguila, Nizar
    KNOWLEDGE-BASED SYSTEMS, 2022, 246
  • [23] Latent Style: multi-style image transfer via latent style coding and skip connection
    Hu, Jingfei
    Wu, Guang
    Wang, Hua
    Zhang, Jicong
    SIGNAL IMAGE AND VIDEO PROCESSING, 2022, 16 (02) : 359 - 368
  • [25] Structural MRI Harmonization via Disentangled Latent Energy-Based Style Translation
    Wu, Mengqi
    Zhang, Lintao
    Yap, Pew-Thian
    Lin, Weili
    Zhu, Hongtu
    Liu, Mingxia
    MACHINE LEARNING IN MEDICAL IMAGING, MLMI 2023, PT I, 2024, 14348 : 1 - 11
  • [26] Latent representation discretization for unsupervised text style generation
    Gao, Yang
    Liu, Qianhui
    Yang, Yizhe
    Wang, Ke
    INFORMATION PROCESSING & MANAGEMENT, 2024, 61 (03)
  • [27] Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations
    Michieli, Umberto
    Zanuttigh, Pietro
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 1114 - 1124
  • [28] The visual representations of words and style in text: An adaptation study
    Hanif, Hashim M.
    Perler, Brielle L.
    Barton, Jason J. S.
    BRAIN RESEARCH, 2013, 1518 : 61 - 70
  • [29] Improving Text Models with Latent Feature Vector Representations
    Peng Huaijin
    Wang Jing
    Shen Qiwei
    2019 13TH IEEE INTERNATIONAL CONFERENCE ON SEMANTIC COMPUTING (ICSC), 2019, : 154 - 157
  • [30] Disentangled Representations for Continual Learning: Overcoming Forgetting and Facilitating Knowledge Transfer
    Xu, Zhaopeng
    Qin, Qi
    Liu, Bing
    Zhao, Dongyan
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, PT IV, ECML PKDD 2024, 2024, 14944 : 143 - 159