- [21] VASWANI A, SHAZEER N, PARMAR N, et al., Attention is all you need, Proc. of the Advances in Neural Information Processing Systems, pp. 5998-6008, (2017)
- [22] FEDUS W, ZOPH B, SHAZEER N, Switch transformers: scaling to trillion parameter models with simple and efficient sparsity
- [23] PRANGEMEIER T, REICH C, KOEPPL H, Attention-based transformers for instance segmentation of cells in microstructures, Proc. of the IEEE International Conference on Bioinformatics and Biomedicine, pp. 700-707, (2020)
- [24] YANG F, YANG H, FU J, et al., Learning texture transformer network for image super-resolution, Proc. of the Computer Vision and Pattern Recognition, (2020)
- [25] CHEN H, WANG Y, GUO T, et al., Pre-trained image processing transformer
- [26] ZHANG H, GOODFELLOW I, METAXAS D, et al., Self-attention generative adversarial networks, Proc. of the International Conference on Machine Learning, pp. 7354-7363, (2019)
- [27] CHEN C F, FAN Q, PANDA R, CrossViT: cross-attention multi-scale vision transformer for image classification
- [28] CHEN C F, PANDA R, FAN Q, RegionViT: regional-to-local attention for vision transformers
- [29] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al., An image is worth 16×16 words: transformers for image recognition at scale, Proc. of the International Conference on Learning Representations, (2021)
- [30] CARION N, MASSA F, SYNNAEVE G, et al., End-to-end object detection with transformers, Proc. of the European Conference on Computer Vision, (2020)