共 50 条
- [31] Does Masked Language Model Pre-training with Artificial Data Improve Low-resource Neural Machine Translation? [J]. 17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 2216 - 2225
- [33] Reinforced Curriculum Learning on Pre-Trained Neural Machine Translation Models [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 9652 - 9659
- [36] Improving Stylized Neural Machine Translation with Iterative Dual Knowledge Transfer [J]. PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 3971 - 3977
- [37] Unsupervised Pre-training for Fully Convolutional Neural Networks [J]. 2016 PATTERN RECOGNITION ASSOCIATION OF SOUTH AFRICA AND ROBOTICS AND MECHATRONICS INTERNATIONAL CONFERENCE (PRASA-ROBMECH), 2016,
- [38] Product-oriented Machine Translation with Cross-modal Cross-lingual Pre-training [J]. PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2843 - 2852
- [40] Matrix product state pre-training for quantum machine learning [J]. QUANTUM SCIENCE AND TECHNOLOGY, 2022, 7 (03):