50 items in total
- [1] Improving AMR Parsing with Sequence-to-Sequence Pre-training [C]. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 2501-2511
- [2] Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting [C]. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 571-582
- [3] MAPGN: Masked Pointer-Generator Network for Sequence-to-Sequence Pre-training [C]. 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021), 2021: 7563-7567
- [4] ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training [C]. Findings of the Association for Computational Linguistics: EMNLP 2020, 2020: 2401-2410
- [5] DU-VLG: Unifying Vision-and-Language Generation via Dual Sequence-to-Sequence Pre-training [C]. Findings of the Association for Computational Linguistics: ACL 2022, 2022: 2552-2566
- [6] MASS: Masked Sequence to Sequence Pre-training for Language Generation [C]. International Conference on Machine Learning (ICML), Vol 97, 2019
- [7] Code Question Answering via Task-Adaptive Sequence-to-Sequence Pre-training [C]. 2022 29th Asia-Pacific Software Engineering Conference (APSEC), 2022: 229-238
- [8] SPT-Code: Sequence-to-Sequence Pre-Training for Learning Source Code Representations [C]. 2022 ACM/IEEE 44th International Conference on Software Engineering (ICSE 2022), 2022: 2006-2018
- [9] Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models [C]. Findings of the Association for Computational Linguistics: ACL 2022, 2022: 1352-1368
- [10] A Fuzzy Training Framework for Controllable Sequence-to-Sequence Generation [J]. IEEE Access, 2022, 10: 92467-92480