Total: 39 entries
- [1] Probing Multi-modal Machine Translation with Pre-trained Language Model [J]. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2021: 3689-3699
- [2] Cyberbullying detection on multi-modal data using pre-trained deep learning architectures [J]. Ingenieria Solidaria, 2021, 17(3)
- [4] Fast Multi-Modal Reuse: Co-Occurrence Pre-Trained Deep Learning Models [J]. Real-Time Image Processing and Deep Learning 2019, Proceedings of SPIE, 2019, 10996
- [6] Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey [J]. Machine Intelligence Research, 2023, 20: 447-482
- [8] MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation [J]. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023
- [9] Multi-modal Segmentation with Missing MR Sequences Using Pre-trained Fusion Networks [J]. Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data (DART 2019, MIL3ID 2019), 2019, 11795: 165-172
- [10] Modal Consistency based Pre-Trained Multi-Model Reuse [J]. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017: 3287-3293