50 entries in total
- [1] Context Compression and Extraction: Efficiency Inference of Large Language Models. Advanced Intelligent Computing Technology and Applications, Pt I, ICIC 2024, 2024, 14875: 221–232.
- [2] LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models. 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023: 13358–13376.
- [3] Measuring and Improving the Energy Efficiency of Large Language Models Inference. IEEE Access, 2024, 12: 80194–80207.
- [4] Language Models for Lexical Inference in Context. 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021), 2021: 1267–1280.
- [5] Inference to the Best Explanation in Large Language Models. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, Vol 1: Long Papers, 2024: 217–235.
- [6] Assessing Inference Time in Large Language Models. System Dependability: Theory and Applications, DepCoS-RELCOMEX 2024, 2024, 1026: 296–305.
- [8] Sources of Hallucination by Large Language Models on Inference Tasks. Findings of the Association for Computational Linguistics: EMNLP 2023, 2023: 2758–2774.
- [9] Compressing Huffman Models on Large Alphabets. 2013 Data Compression Conference (DCC), 2013: 381–390.
- [10] Improving Causal Inference of Large Language Models with SCM Tools. Natural Language Processing and Chinese Computing, Pt III, NLPCC 2024, 2025, 15361: 3–14.