共 50 条
- [2] BERT-NAR-BERT: A Non-Autoregressive Pre-Trained Sequence-to-Sequence Model Leveraging BERT Checkpoints [J]. IEEE ACCESS, 2024, 12 : 23 - 33
- [3] Patent classification with pre-trained Bert model [J]. JOURNAL OF THE FACULTY OF ENGINEERING AND ARCHITECTURE OF GAZI UNIVERSITY, 2024, 39 (04): : 2485 - 2496
- [4] Interpreting Art by Leveraging Pre-Trained Models [J]. 2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023,
- [5] The Lottery Ticket Hypothesis for Pre-trained BERT Networks [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
- [6] Modeling essay grading with pre-trained BERT features [J]. APPLIED INTELLIGENCE, 2024, 54 (06) : 4979 - 4993
- [7] Leveraging Pre-Trained Embeddings for Welsh Taggers [J]. 4TH WORKSHOP ON REPRESENTATION LEARNING FOR NLP (REPL4NLP-2019), 2019, : 270 - 280
- [8] Sharing Pre-trained BERT Decoder for a Hybrid Summarization [J]. CHINESE COMPUTATIONAL LINGUISTICS, CCL 2019, 2019, 11856 : 169 - 180
- [9] Lattice-BERT: Leveraging Multi-Granularity Representations in Chinese Pre-trained Language Models [J]. 2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 1716 - 1731