Total: 50 entries
- [1] Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation? Findings of the Association for Computational Linguistics (ACL 2022), 2022: 58-67.
- [2] ScoutWav: Two-Step Fine-Tuning on Self-Supervised Automatic Speech Recognition for Low-Resource Environments. Interspeech 2022, 2022: 3523-3527.
- [3] Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning. International Journal of Machine Learning and Cybernetics, 2024, 15: 1711-1728.
- [5] A Dataset for Low-Resource Stylized Sequence-to-Sequence Generation. Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2020), 2020, 34: 9290-9297.
- [6] Empirical Evaluation of Sequence-to-Sequence Models for Word Discovery in Low-Resource Settings. Interspeech 2019, 2019: 2688-2692.
- [9] Multilingual Unsupervised Sequence Segmentation Transfers to Extremely Low-Resource Languages. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers, 2022: 5331-5346.