- [2] Evaluation of FractalDB Pre-training with Vision Transformers. Seimitsu Kogaku Kaishi / Journal of the Japan Society for Precision Engineering, 2023, 89(1): 99-104.
- [3] An Empirical Comparison of Joint-Training and Pre-training for Domain-Agnostic Semi-Supervised Learning via Energy-Based Models. 2021 IEEE 31st International Workshop on Machine Learning for Signal Processing (MLSP), 2021.
- [4] TNT: Text Normalization based Pre-training of Transformers for Content Moderation. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 4735-4741.
- [5] Pre-training of Graph Augmented Transformers for Medication Recommendation. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI), 2019: 5953-5959.
- [6] Lifting the Curse of Multilinguality by Pre-training Modular Transformers. NAACL 2022: The 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022: 3479-3495.
- [8] Unsupervised Extractive Summarization by Pre-training Hierarchical Transformers. Findings of the Association for Computational Linguistics: EMNLP 2020, 2020: 1784-1795.
- [10] TUTA: Tree-based Transformers for Generally Structured Table Pre-training. KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021: 1780-1790.