共 50 条
- [1] Improving Short Answer Grading Using Transformer-Based Pre-training [J]. ARTIFICIAL INTELLIGENCE IN EDUCATION (AIED 2019), PT I, 2019, 11625 : 469 - 481
- [2] Ouroboros: On Accelerating Training of Transformer-Based Language Models [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
- [4] On Efficient Transformer-Based Image Pre-training for Low-Level Vision [J]. PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 1089 - 1097
- [7] Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
- [8] No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-based Language Models [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
- [9] LETCP: A Label-Efficient Transformer-Based Contrastive Pre-Training Method for Brain Tumor Segmentation [J]. APPLIED SCIENCES-BASEL, 2022, 12 (21):
- [10] Improving the Sample Efficiency of Pre-training Language Models [J]. ERCIM NEWS, 2024, (136): : 38 - 40