50 items in total
- [2] Commonsense Knowledge Base Completion with Relational Graph Attention Network and Pre-trained Language Model [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022: 4104-4108.
- [3] Knowledge-Infused Pre-trained Models for KG Completion [J]. WEB INFORMATION SYSTEMS ENGINEERING, WISE 2020, PT I, 2020, 12342: 273-285.
- [4] Pre-trained Language Model with Prompts for Temporal Knowledge Graph Completion [J]. arXiv, 2023.
- [5] Pre-Trained Image Processing Transformer [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021: 12294-12305.
- [6] Adder Encoder for Pre-trained Language Model [J]. CHINESE COMPUTATIONAL LINGUISTICS, CCL 2023, 2023, 14232: 339-347.
- [7] Integrally Migrating Pre-trained Transformer Encoder-decoders for Visual Object Detection [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023: 6802-6811.
- [8] SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1 (LONG PAPERS), 2022: 4281-4294.
- [9] A Multi-layer Bidirectional Transformer Encoder for Pre-trained Word Embedding: A Survey of BERT [J]. PROCEEDINGS OF THE CONFLUENCE 2020: 10TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING, DATA SCIENCE & ENGINEERING, 2020: 336-340.
- [10] Knowledge Base Grounded Pre-trained Language Models via Distillation [J]. 39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024: 1617-1625.