50 records in total
- [41] Understanding the Effect of Model Compression on Social Bias in Large Language Models. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023: 2663-2675.
- [43] Upstream Mitigation Is Not All You Need: Testing the Bias Transfer Hypothesis in Pre-Trained Language Models. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers, 2022: 3524-3542.
- [44] Text Is All You Need: Learning Language Representations for Sequential Recommendation. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2023), 2023: 1258-1267.
- [45] Fortify the Shortest Stave in Attention: Enhancing Context Awareness of Large Language Models for Effective Tool-Use. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), Vol. 1: Long Papers, 2024: 11160-11174.
- [46] Rapid Speaker Adaptation for Conformer Transducer: Attention and Bias Are All You Need. Interspeech 2021, 2021: 1309-1313.
- [48] A Little Bit Attention Is All You Need for Person Re-Identification. 2023 IEEE International Conference on Robotics and Automation (ICRA 2023), 2023: 7598-7605.
- [50] Cross-Attention Watermarking of Large Language Models. 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2024), 2024: 4625-4629.