50 items in total
- [1] Pre-training Mention Representations in Coreference Models [C]. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 8534-8540
- [2] Advances in Pre-Training Distributed Word Representations [C]. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018: 52-55
- [3] Multi-stage Pre-training over Simplified Multimodal Pre-training Models [C]. 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Vol 1 (ACL-IJCNLP 2021), 2021: 2556-2565
- [4] Ontology Pre-training for Poison Prediction [C]. Advances in Artificial Intelligence, KI 2023, 2023, 14236: 31-45
- [7] Better Representations via Adversarial Training in Pre-Training: A Theoretical Perspective [C]. International Conference on Artificial Intelligence and Statistics, Vol 238, 2024
- [8] GBERT: Pre-training User Representations for Ephemeral Group Recommendation [C]. Proceedings of the 31st ACM International Conference on Information and Knowledge Management, CIKM 2022, 2022: 2631-2639
- [9] Contrastive Representations Pre-Training for Enhanced Discharge Summary BERT [C]. 2021 IEEE 9th International Conference on Healthcare Informatics (ICHI 2021), 2021: 507-508
- [10] Stock Return Prediction: Stacking a Variety of Models [J]. Journal of Empirical Finance, 2022, 67: 288-317