50 entries in total
- [1] Improving AMR Parsing with Sequence-to-Sequence Pre-training. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pp. 2501-2511.
- [2] Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), pp. 571-582.
- [3] Denoising based Sequence-to-Sequence Pre-training for Text Generation. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP 2019), pp. 4003-4015.
- [4] MASS: Masked Sequence to Sequence Pre-training for Language Generation. Proceedings of the International Conference on Machine Learning (ICML 2019), vol. 97.
- [5] ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training. Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 2401-2410.
- [6] Code Question Answering via Task-Adaptive Sequence-to-Sequence Pre-training. Proceedings of the 29th Asia-Pacific Software Engineering Conference (APSEC 2022), pp. 229-238.
- [7] SPT-Code: Sequence-to-Sequence Pre-Training for Learning Source Code Representations. Proceedings of the 44th ACM/IEEE International Conference on Software Engineering (ICSE 2022), pp. 2006-2018.
- [8] Masked Hard Coverage Mechanism on Pointer-generator Network for Natural Language Generation. Proceedings of the 13th International Conference on Agents and Artificial Intelligence (ICAART 2021), vol. 2, pp. 1177-1183.
- [9] Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models. Findings of the Association for Computational Linguistics: ACL 2022, pp. 1352-1368.
- [10] SkeletonMAE: Graph-based Masked Autoencoder for Skeleton Sequence Pre-training. Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), pp. 5583-5595.