50 entries in total
- [41] Continual Learning with Pre-Trained Models: A Survey. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI 2024), 2024: 8363-8371
- [42] On Checking Robustness on Named Entity Recognition with Pre-trained Transformers Models. Baltic Journal of Modern Computing, 2023, 11(4): 591-606
- [43] Syntax-BERT: Improving Pre-trained Transformers with Syntax Trees. 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021), 2021: 3011-3020
- [45] Logical Transformers: Infusing Logical Structures into Pre-Trained Language Models. Findings of the Association for Computational Linguistics: ACL 2023, 2023: 1762-1773
- [46] Finding and Editing Multi-Modal Neurons in Pre-Trained Transformers. Findings of the Association for Computational Linguistics: ACL 2024, 2024: 1012-1037
- [49] Machine Unlearning of Pre-trained Large Language Models. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Vol. 1: Long Papers, 2024: 8403-8419
- [50] Harnessing Pre-Trained Sentence Transformers for Offensive Language Detection in Indian Languages. CEUR Workshop Proceedings: 427-434