50 records in total
- [2] Detoxifying Large Language Models via Kahneman-Tversky Optimization. Natural Language Processing and Chinese Computing, Pt. V, NLPCC 2024, 2025, 15363: 409-417
- [3] Knowledge Editing of Large Language Models Unconstrained by Word Order. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Vol. 4: Student Research Workshop, 2024: 177-187
- [4] Cross-Lingual Knowledge Editing in Large Language Models. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Vol. 1: Long Papers, 2024: 11676-11686
- [5] InstructEdit: Instruction-Based Knowledge Editing for Large Language Models. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI 2024, 2024: 6633-6641
- [6] Editing Factual Knowledge in Language Models. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 6491-6506
- [7] Editing Personality for Large Language Models. Natural Language Processing and Chinese Computing, Pt. II, NLPCC 2024, 2025, 15360: 241-254
- [8] EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Vol. 3: System Demonstrations, 2024: 82-93
- [9] Self-Detoxifying Language Models via Toxification Reversal. 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, 2023: 4433-4449
- [10] Challenges in Detoxifying Language Models. Findings of the Association for Computational Linguistics, EMNLP 2021, 2021: 2447-2469