50 records in total
- [41] Learning Attention from Attention: Efficient Self-Refinement Transformer for Face Super-Resolution. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI 2023), 2023: 1035-1043
- [42] Global voxel transformer networks for augmented microscopy. Nature Machine Intelligence, 2021, 3: 161-171
- [44] Spatiotemporal Transformer Attention Network for 3D Voxel Level Joint Segmentation and Motion Prediction in Point Cloud. 2022 IEEE Intelligent Vehicles Symposium (IV), 2022: 1381-1386
- [46] Efficient Visual Tracking Using Local Information Patch Attention Free Transformer. 2022 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-TW 2022), 2022: 447-448
- [48] Layer-wise Pruning of Transformer Attention Heads for Efficient Language Modeling. 18th International SoC Design Conference (ISOCC 2021), 2021: 357-358
- [49] EGFormer: An Enhanced Transformer Model with Efficient Attention Mechanism for Traffic Flow Forecasting. Vehicles, 2024, 6(1): 120-139
- [50] DARKER: Efficient Transformer with Data-driven Attention Mechanism for Time Series. Proceedings of the VLDB Endowment, 2024, 17(11): 3229-3242