共 50 条
- [31] MASTER: Multi-task Pre-trained Bottlenecked Masked Autoencoders Are Better Dense Retrievers [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT II, 2023, 14170 : 630 - 647
- [35] On the effect of dropping layers of pre-trained transformer models [J]. COMPUTER SPEECH AND LANGUAGE, 2022, 77
- [36] Automatic Question Generation using RNN-based and Pre-trained Transformer-based Models in Low Resource Indonesian Language [J]. INFORMATICA-AN INTERNATIONAL JOURNAL OF COMPUTING AND INFORMATICS, 2022, 46 (07): : 103 - 118
- [37] Framing and BERTology: A Data-Centric Approach to Integration of Linguistic Features into Transformer-Based Pre-trained Language Models [J]. INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 4, INTELLISYS 2023, 2024, 825 : 81 - 90
- [40] Photo-based Carbohydrates Counting using Pre-trained Transformer Models [J]. IFAC PAPERSONLINE, 2023, 56 (02): : 11533 - 11538