共 50 条
- [22] Probing for Hyperbole in Pre-Trained Language Models [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-SRW 2023, VOL 4, 2023, : 200 - 211
- [23] Adopting Pre-trained Large Language Models for Regional Language Tasks: A Case Study [J]. INTELLIGENT HUMAN COMPUTER INTERACTION, IHCI 2023, PT I, 2024, 14531 : 15 - 25
- [24] Controllable Generation from Pre-trained Language Models via Inverse Prompting [J]. KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 2450 - 2460
- [25] An Investigation of Suitability of Pre-Trained Language Models for Dialogue Generation - Avoiding Discrepancies [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 4481 - 4494
- [27] Attribute Alignment: Controlling Text Generation from Pre-trained Language Models [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 2251 - 2268
- [28] Clinical efficacy of pre-trained large language models through the lens of aphasia [J]. SCIENTIFIC REPORTS, 2024, 14 (01):
- [30] The Use and Misuse of Pre-Trained Generative Large Language Models in Reliability Engineering [J]. 2024 ANNUAL RELIABILITY AND MAINTAINABILITY SYMPOSIUM, RAMS, 2024,