共 50 条
- [2] Universal Graph Transformer Self-Attention Networks [J]. COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2022, WWW 2022 COMPANION, 2022, : 193 - 196
- [5] Lite Vision Transformer with Enhanced Self-Attention [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 11988 - 11998
- [6] Synthesizer: Rethinking Self-Attention for Transformer Models [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139 : 7192 - 7203
- [7] Self-Attention Mechanism in GANs for Molecule Generation [J]. 20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021, : 57 - 60
- [8] Slide-Transformer: Hierarchical Vision Transformer with Local Self-Attention [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 2082 - 2091
- [9] Local self-attention in transformer for visual question answering [J]. APPLIED INTELLIGENCE, 2023, 53 (13) : 16706 - 16723
- [10] Tree Transformer: Integrating Tree Structures into Self-Attention [J]. 2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 1061 - 1070