50 records in total
- [1] RAPT: Pre-training of Time-Aware Transformer for Learning Robust Healthcare Representation [J]. KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021: 3503-3511
- [2] Improving Transformer-based Speech Recognition with Unsupervised Pre-training and Multi-task Semantic Knowledge Learning [J]. INTERSPEECH 2020, 2020: 5006-5010
- [3] Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning [J]. 2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall), 2020
- [4] Pre-training Strategies and Datasets for Facial Representation Learning [J]. Computer Vision, ECCV 2022, Part XIII, 2022, 13673: 107-125
- [7] Unsupervised Pre-Training for Voice Activation [J]. Applied Sciences-Basel, 2020, 10(23): 1-13
- [9] Multilingual Molecular Representation Learning via Contrastive Pre-training [J]. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol 1: Long Papers, 2022: 3441-3453
- [10] RePreM: Representation Pre-training with Masked Model for Reinforcement Learning [J]. Thirty-Seventh AAAI Conference on Artificial Intelligence, Vol 37, No 6, 2023: 6879-6887