40 records in total
- [21] Towards Mitigating Hallucination in Large Language Models via Self-Reflection. Findings of the Association for Computational Linguistics: EMNLP 2023, 2023: 1827-1843.
- [22] Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34.
- [24] Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol. 2, 2023: 340-351.
- [25] Mitigating Spatial Hallucination in Large Language Models for Path Planning via Prompt Engineering. Scientific Reports, 2025, 15(1).
- [26] Assessing Inherent Biases Following Prompt Compression of Large Language Models for Game Story Generation. 2024 IEEE Conference on Games (CoG 2024), 2024.
- [27] Subtle Biases Need Subtler Measures: Dual Metrics for Evaluating Representative and Affinity Bias in Large Language Models. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Vol. 1: Long Papers, 2024: 375-392.
- [28] Mitigating Reversal Curse in Large Language Models via Semantic-aware Permutation Training. Findings of the Association for Computational Linguistics: ACL 2024, 2024: 11453-11464.
- [29] Mitigating Hallucination in Visual-Language Models via Re-balancing Contrastive Decoding. Pattern Recognition and Computer Vision (PRCV 2024), Part V, 2025, 15035: 482-496.
- [30] CommonIT: Commonality-Aware Instruction Tuning for Large Language Models via Data Partitions. Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024), 2024: 10064-10083.