共 50 条
- [21] Knowledge Base Grounded Pre-trained Language Models via Distillation [J]. 39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 1617 - 1625
- [22] General Cross-Architecture Distillation of Pretrained Language Models into Matrix Embeddings [J]. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
- [24] Beyond Structural Causal Models: Causal Constraints Models [J]. 35TH UNCERTAINTY IN ARTIFICIAL INTELLIGENCE CONFERENCE (UAI 2019), 2020, 115 : 585 - 594
- [25] A causal partition of trait correlations: using graphical models to derive statistical models from theoretical language [J]. ECOSPHERE, 2018, 9 (09):
- [27] Causal State Distillation for Explainable Reinforcement Learning [J]. CAUSAL LEARNING AND REASONING, VOL 236, 2024, 236 : 106 - 142
- [30] ReAugKD: Retrieval-Augmented Knowledge Distillation For Pre-trained Language Models [J]. 61ST CONFERENCE OF THE THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 2, 2023, : 1128 - 1136