Improving Sign Language Translation with Monolingual Data by Sign Back-Translation

Cited by: 66
Authors
Zhou, Hao [1 ]
Zhou, Wengang [1 ,2 ]
Qi, Weizhen [1 ]
Pu, Junfu [1 ]
Li, Houqiang [1 ,2 ]
Affiliations
[1] Univ Sci & Technol China, EEIS Dept, CAS Key Lab GIPAS, Hefei, Peoples R China
[2] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
RECOGNITION;
DOI
10.1109/CVPR46437.2021.00137
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Despite existing pioneering works on sign language translation (SLT), there is a non-trivial obstacle, i.e., the limited quantity of parallel sign-text data. To tackle this parallel data bottleneck, we propose a sign back-translation (SignBT) approach, which incorporates massive spoken language texts into SLT training. With a text-to-gloss translation model, we first back-translate the monolingual text to its gloss sequence. Then, the paired sign sequence is generated by splicing pieces from an estimated gloss-to-sign bank at the feature level. Finally, the synthetic parallel data serves as a strong supplement for the end-to-end training of the encoder-decoder SLT framework. To promote the SLT research, we further contribute CSL-Daily, a large-scale continuous SLT dataset. It provides both spoken language translations and gloss-level annotations. The topic revolves around people's daily lives (e.g., travel, shopping, medical care), the most likely SLT application scenario. Extensive experimental results and analysis of SLT methods are reported on CSL-Daily. With the proposed sign back-translation method, we obtain a substantial improvement over previous state-of-the-art SLT methods.
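The abstract's pipeline (back-translate monolingual text to glosses, then splice a pseudo sign-feature sequence from a gloss-to-sign bank) can be illustrated with a toy sketch. All names here (`TEXT2GLOSS`, `SIGN_BANK`, the helper functions) are hypothetical stand-ins for the paper's trained text-to-gloss model and estimated feature bank, not the authors' actual implementation; real features would be video-level embeddings, not short float vectors.

```python
import random

# Hypothetical toy text-to-gloss table, standing in for the trained
# text-to-gloss back-translation model.
TEXT2GLOSS = {
    "i go shopping": ["I", "SHOP", "GO"],
    "see the doctor": ["DOCTOR", "SEE"],
}

# Hypothetical gloss-to-sign bank: each gloss maps to candidate feature
# clips (tiny float vectors here instead of real sign-video features).
SIGN_BANK = {
    "I": [[0.1, 0.2]],
    "SHOP": [[0.3, 0.4], [0.35, 0.45]],
    "GO": [[0.5, 0.6]],
    "DOCTOR": [[0.7, 0.8]],
    "SEE": [[0.9, 1.0]],
}

def back_translate(text):
    """Text -> gloss sequence (the back-translation step)."""
    return TEXT2GLOSS[text]

def splice_signs(glosses, rng=random):
    """Splice one candidate clip per gloss into a pseudo sign sequence."""
    return [rng.choice(SIGN_BANK[g]) for g in glosses]

def make_synthetic_pair(text):
    """Produce a (pseudo sign sequence, text) pair to supplement SLT training."""
    glosses = back_translate(text)
    return splice_signs(glosses), text

features, target = make_synthetic_pair("i go shopping")
print(len(features), target)  # one spliced clip per back-translated gloss
```

The resulting synthetic pairs would be mixed with the genuine parallel data when training the encoder-decoder SLT model.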
Pages: 1316 - 1325
Page count: 10
Related Papers
50 records in total
  • [1] Scaling Back-Translation with Domain Text Generation for Sign Language Gloss Translation
    Ye, Jinhui
    Jiao, Wenxiang
    Wang, Xing
    Tu, Zhaopeng
    [J]. 17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 463 - 476
  • [2] Gloss Semantic-Enhanced Network with Online Back-Translation for Sign Language Production
    Tang, Shengeng
    Hong, Richang
    Guo, Dan
    Wang, Meng
    [J]. PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5630 - 5638
  • [3] There and Back Again: 3D Sign Language Generation from Text Using Back-Translation
    Stoll, Stephanie
    Mustafa, Armin
    Guillemaut, Jean-Yves
    [J]. 2022 INTERNATIONAL CONFERENCE ON 3D VISION, 3DV, 2022, : 187 - 196
  • [4] Sign Language Translation
    Harini, R.
    Janani, R.
    Keerthana, S.
    Madhubala, S.
    Venkatasubramanian, S.
    [J]. 2020 6TH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTING AND COMMUNICATION SYSTEMS (ICACCS), 2020, : 883 - 886
  • [5] Factored Translation Models for improving a Speech into Sign Language Translation System
    Lopez-Ludena, V.
    San-Segundo, R.
    Cordoba, R.
    Ferreiros, J.
    Montero, J. M.
    Pardo, J. M.
    [J]. 12TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2011 (INTERSPEECH 2011), VOLS 1-5, 2011, : 1616 - 1619
  • [6] Back-translation in Translation Teaching
    Liu, Cong
    [J]. 读与写(教育教学刊) (Read and Write: Education and Teaching Edition), 2018, 15 (10) : 3 - 3
  • [7] Neural Sign Language Translation
    Camgoz, Necati Cihan
    Hadfield, Simon
    Koller, Oscar
    Ney, Hermann
    Bowden, Richard
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 7784 - 7793
  • [8] Translation and Interpretation of Sign Language
    de Quadros, Ronice Mueller
    [J]. CADERNOS DE TRADUCAO, 2010, 26 (02): : 9 - 12
  • [9] Challenges with Sign Language Datasets for Sign Language Recognition and Translation
    De Sisto, Mirella
    Vandeghinste, Vincent
    Gomez, Santiago Egea
    De Coster, Mathieu
    Shterionov, Dimitar
    Saggion, Horacio
    [J]. LREC 2022: THIRTEEN INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 2478 - 2487
  • [10] Tagged Back-Translation
    Caswell, Isaac
    Chelba, Ciprian
    Grangier, David
    [J]. FOURTH CONFERENCE ON MACHINE TRANSLATION (WMT 2019), VOL 1: RESEARCH PAPERS, 2019, : 53 - 63