50 entries in total
- [31] Small Language Models Need Strong Verifiers to Self-Correct Reasoning. Findings of the Association for Computational Linguistics: ACL 2024, 2024: 15637–15653.
- [35] Select, Prompt, Filter: Distilling Large Language Models for Summarizing Conversations. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023: 12257–12265.
- [36] Efficient Toxic Content Detection by Bootstrapping and Distilling Large Language Models. Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 19, 2024: 21779–21787.
- [37] Large Language Models Are Reasoning Teachers. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023): Long Papers, Vol. 1, 2023: 14852–14882.
- [39] Distilling Wisdom: A Review on Optimizing Learning From Massive Language Models. IEEE Access, 2025, 13: 56296–56325.