Harnessing Pre-Trained Neural Networks with Rules for Formality Style Transfer

Cited by: 0
Authors
Wang, Yunli [1 ]
Wu, Yu [2 ]
Mou, Lili [3 ]
Li, Zhoujun [1 ]
Chao, Wenhan [1 ]
Affiliations
[1] Beihang Univ, State Key Lab Software Dev Environm, Beijing, Peoples R China
[2] Microsoft Res, Beijing, Peoples R China
[3] Univ Alberta, Dept Comp Sci, Edmonton, AB, Canada
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Formality text style transfer plays an important role in various NLP applications, such as non-native speaker assistants and child education. Early studies normalized informal sentences with rules, before statistical and neural models became the prevailing methods in the field. While a rule-based system is still a common preprocessing step for formality style transfer in the neural era, it can introduce noise if the rules are applied naively, e.g., as plain data preprocessing. To mitigate this problem, we study how to harness rules into a state-of-the-art neural network that is typically pretrained on massive corpora. We propose three fine-tuning methods in this paper and achieve a new state of the art on benchmark datasets.
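As a rough illustration of how rules might be combined with a pretrained model rather than used only as data preprocessing, the sketch below applies a few hand-written normalization rules to an informal sentence and feeds both the raw sentence and its rule-processed version to a pretrained sequence-to-sequence model during fine-tuning. This is a minimal sketch only: the checkpoint (t5-small), the example rule list, and the concatenated input format are assumptions for illustration, not the paper's actual fine-tuning schemes.

```python
# Minimal sketch (assumed setup, not the paper's exact method):
# fine-tune a pretrained seq2seq model on an (informal, formal) pair,
# giving it both the original sentence and its rule-normalized form.
import re
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative string-rewriting rules of the kind used to normalize
# informal text (expand chat abbreviations, squash repeated punctuation).
RULES = [
    (re.compile(r"\bu\b", re.IGNORECASE), "you"),
    (re.compile(r"\br\b", re.IGNORECASE), "are"),
    (re.compile(r"!{2,}"), "!"),
]

def apply_rules(sentence: str) -> str:
    """Normalize an informal sentence with simple rewriting rules."""
    for pattern, replacement in RULES:
        sentence = pattern.sub(replacement, sentence)
    sentence = sentence.strip()
    return sentence[:1].upper() + sentence[1:]

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

informal = "u r the best teacher ever!!!"
formal_ref = "You are the best teacher I have ever had."

# Concatenate the raw sentence with its rule-processed version, so the
# network can exploit the rules without being forced to trust them.
source = f"informal: {informal} rules: {apply_rules(informal)}"
inputs = tokenizer(source, return_tensors="pt")
labels = tokenizer(formal_ref, return_tensors="pt").input_ids

# Compute the fine-tuning loss on this pair and backpropagate; an
# optimizer step over the full parallel corpus would follow in training.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
```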
Pages: 3573-3578
Number of pages: 6