50 records in total
- [31] DEEP: DEnoising Entity Pre-training for Neural Machine Translation. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1 (Long Papers), 2022: 1753-1766.
- [33] Improving AMR-to-text Generation with Multi-task Pre-training. Ruan Jian Xue Bao/Journal of Software, 2021, 32(10): 3036-3050.
- [35] Multi-task Pre-training for Lhasa-Tibetan Speech Recognition. Artificial Neural Networks and Machine Learning, ICANN 2023, Pt. IX, 2023, 14262: 78-90.
- [36] Low-Resource Named Entity Recognition via the Pre-Training Model. Symmetry-Basel, 2021, 13(5).
- [37] Improving News Recommendation via Bottlenecked Multi-task Pre-training. Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023), 2023: 2082-2086.
- [38] Multi-task Self-supervised Pre-training for Music Classification. 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), 2021: 556-560.
- [39] On the Complementarity between Pre-Training and Back-Translation for Neural Machine Translation. Findings of the Association for Computational Linguistics: EMNLP 2021, 2021: 2900-2907.
- [40] Improving Robustness of Neural Machine Translation with Multi-task Learning. Fourth Conference on Machine Translation (WMT 2019), 2019: 565-571.