38 references in total
- [31] DEVLIN J, CHANG M W, LEE K, et al., BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Proc of the Conference of the North American Chapter of the Association for Computational Linguistics (Long and Short Papers), pp. 4171-4186, (2019)
- [32] BROWN T B, MANN B, RYDER N, et al., Language Models Are Few-Shot Learners, Proc of the 34th International Conference on Neural Information Processing Systems, pp. 1877-1901, (2020)
- [33] LI J N, SELVARAJU R, GOTMARE A D, et al., Align Before Fuse: Vision and Language Representation Learning with Momentum Distillation, Proc of the 35th International Conference on Neural Information Processing Systems, pp. 9694-9705, (2021)
- [34] LI J N, LI D X, XIONG C M, et al., BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation, Proceedings of Machine Learning Research, 162, pp. 12888-12900, (2022)
- [35] HUANG Z C, ZENG Z Y, HUANG Y P, et al., Seeing Out of the Box: End-to-End Pre-Training for Vision-Language Representation Learning, Proc of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12971-12980, (2021)
- [36] LIU Y G, SINGH L, MNEIMNEH Z, A Comparative Analysis of Classic and Deep Learning Models for Inferring Gender and Age of Twitter Users, Proc of the 2nd International Conference on Deep Learning Theory and Applications, pp. 48-58, (2021)
- [37] WIEGMANN M, STEIN B, POTTHAST M, Celebrity Profiling, Proc of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2611-2618, (2019)
- [38] DING M, ZHOU C, YANG H X, et al., CogLTX: Applying BERT to Long Texts, Proc of the 34th International Conference on Neural Information Processing Systems, pp. 12792-12804, (2020)