共 50 条
- [41] Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 12813 - 12832
- [42] Enhancing Machine-Generated Text Detection: Adversarial Fine-Tuning of Pre-Trained Language Models IEEE ACCESS, 2024, 12 : 65333 - 65340
- [43] Comparative Study of Fine-Tuning of Pre-Trained Convolutional Neural Networks for Diabetic Retinopathy Screening 2017 24TH NATIONAL AND 2ND INTERNATIONAL IRANIAN CONFERENCE ON BIOMEDICAL ENGINEERING (ICBME), 2017, : 224 - 229
- [44] Fine-Tuning Pre-Trained Model to Extract Undesired Behaviors from App Reviews 2022 IEEE 22ND INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY, QRS, 2022, : 1125 - 1134
- [46] Fine-Tuning of Pre-Trained Deep Face Sketch Models Using Smart Switching Slime Mold Algorithm APPLIED SCIENCES-BASEL, 2023, 13 (08):
- [50] HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 3246 - 3264