50 records in total
- [1] Exploring the Role of Monolingual Data in Cross-Attention Pre-training for Neural Machine Translation [C]. COMPUTATIONAL COLLECTIVE INTELLIGENCE, ICCCI 2023, 2023, 14162: 179-190
- [2] Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation [C]. 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021: 1754-1765
- [4] Cross Aggregation of Multi-head Attention for Neural Machine Translation [C]. NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING (NLPCC 2019), PT I, 2019, 11838: 380-392
- [5] Look Harder: A Neural Machine Translation Model with Hard Attention [C]. 57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019: 3037-3043
- [6] A Visual Attention Grounding Neural Model for Multimodal Machine Translation [C]. 2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018: 3643-3653
- [8] Neural Topic Modeling with Gaussian Mixture Model and Householder Flow [C]. ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2022, PT II, 2022, 13281: 417-428
- [9] Recurrent Attention for Neural Machine Translation [C]. 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021: 3216-3225