50 records in total
- [2] MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation [J]. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023
- [4] MuDPT: Multi-modal Deep-symphysis Prompt Tuning for Large Pre-trained Vision-Language Models [J]. 2023 IEEE International Conference on Multimedia and Expo (ICME), 2023: 25-30
- [5] Probing Multi-modal Machine Translation with Pre-trained Language Model [J]. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2021: 3689-3699
- [7] Fast Multi-Modal Reuse: Co-Occurrence Pre-Trained Deep Learning Models [J]. Proceedings of SPIE - The International Society for Optical Engineering, 2019, 10996
- [9] Difference between Multi-modal vs. Text Pre-trained Models in Embedding Text [J]. Beijing Daxue Xuebao (Ziran Kexue Ban)/Acta Scientiarum Naturalium Universitatis Pekinensis, 2023, 59(1): 48-56