50 items in total
- [1] Pre-Training Transformers for Fingerprinting to Improve Stress Prediction in fMRI. Medical Imaging with Deep Learning (MIDL), 2023, 227: 212-234
- [2] Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol. 1, 2023: 5731-5746
- [4] CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022
- [5] CroCo v2: Improved Cross-view Completion Pre-training for Stereo Matching and Optical Flow. 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023: 17923-17934
- [7] Multi-view Analysis of Unregistered Medical Images Using Cross-View Transformers. Medical Image Computing and Computer Assisted Intervention (MICCAI 2021), Pt. III, 2021, 12903: 104-113
- [8] Evaluation of FractalDB Pre-training with Vision Transformers. Seimitsu Kogaku Kaishi / Journal of the Japan Society for Precision Engineering, 2023, 89(1): 99-104
- [9] Pre-training of Graph Augmented Transformers for Medication Recommendation. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI 2019), 2019: 5953-5959
- [10] Lifting the Curse of Multilinguality by Pre-training Modular Transformers. NAACL 2022: The 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022: 3479-3495