41 entries in total
- [31] Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol. 2, 2023, pp. 1512-1521.
- [32] ROSE Doesn't Do That: Boosting the Safety of Instruction-Tuned Large Language Models with Reverse Prompt Contrastive Decoding. Findings of the Association for Computational Linguistics: ACL 2024, 2024, pp. 13721-13736.
- [35] Efficient Fine-Tuning of Large Language Models via a Low-Rank Gradient Estimator. Applied Sciences-Basel, 2025, 15(1).
- [36] Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Vol. 1: Long Papers, 2024, pp. 7828-7840.
- [37] Enhancing In-Context Learning of Large Language Models for Knowledge Graph Reasoning via Rule-and-Reinforce Selected Triples. Applied Sciences-Basel, 2025, 15(3).
- [38] TS-HTFA: Advancing Time-Series Forecasting via Hierarchical Text-Free Alignment with Large Language Models. Symmetry-Basel, 2025, 17(3).
- [39] Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
- [40] Quant-LLM: Accelerating the Serving of Large Language Models via FP6-Centric Algorithm-System Co-Design on Modern GPUs. Proceedings of the 2024 USENIX Annual Technical Conference (ATC 2024), 2024, pp. 699-713.