共 50 条
- [1] Advances in Pre-Training Distributed Word Representations [J]. PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2018), 2018, : 52 - 55
- [2] From Uniform Models To Generic Representations: Stock Return Prediction With Pre-training [J]. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
- [3] Multi-stage Pre-training over Simplified Multimodal Pre-training Models [J]. 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 2556 - 2565
- [6] Better Representations via Adversarial Training in Pre-Training: A Theoretical Perspective [J]. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
- [7] GBERT: Pre-training User representations for Ephemeral Group Recommendation [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 2631 - 2639
- [8] Contrastive Representations Pre-Training for Enhanced Discharge Summary BERT [J]. 2021 IEEE 9TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI 2021), 2021, : 507 - 508
- [9] A Method of Relation Extraction Using Pre-training Models [J]. 2020 13TH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID 2020), 2020, : 176 - 179
- [10] Improving the Sample Efficiency of Pre-training Language Models [J]. ERCIM NEWS, 2024, (136): : 38 - 40