UniFormer: Unifying Convolution and Self-Attention for Visual Recognition

Cited by: 99
Authors:
Li, Kunchang [1 ,2 ]
Wang, Yali [1 ,4 ]
Zhang, Junhao [3 ]
Gao, Peng [4 ]
Song, Guanglu [5 ]
Liu, Yu [5 ]
Li, Hongsheng [6 ]
Qiao, Yu [1 ,4 ]
Affiliations:
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, ShenZhen Key Lab Comp Vis & Pattern Recognit, Shenzhen 518055, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Natl Univ Singapore, Singapore 119077, Singapore
[4] Shanghai Artificial Intelligence Lab, Shanghai 200232, Peoples R China
[5] SenseTime Res, Shanghai 200233, Peoples R China
[6] Chinese Univ Hong Kong, Hong Kong, Peoples R China
Funding:
National Natural Science Foundation of China;
Keywords:
UniFormer; convolution neural network; transformer; self-attention; visual recognition;
DOI
10.1109/TPAMI.2023.3282631
CLC (Chinese Library Classification):
TP18 [Artificial Intelligence Theory];
Subject classification codes:
081104; 0812; 0835; 1405;
Abstract
It is challenging to learn discriminative representations from images and videos, due to the large local redundancy and complex global dependency in such visual data. Convolution neural networks (CNNs) and vision transformers (ViTs) have been the two dominant frameworks in the past few years. Though CNNs can efficiently decrease local redundancy by convolution within a small neighborhood, their limited receptive field makes it hard to capture global dependency. Alternatively, ViTs can effectively capture long-range dependency via self-attention, while blind similarity comparisons among all tokens lead to high redundancy. To resolve these problems, we propose a novel Unified transFormer (UniFormer), which can seamlessly integrate the merits of convolution and self-attention in a concise transformer format. Different from typical transformer blocks, the relation aggregators in our UniFormer block are equipped with local token affinity in shallow layers and global token affinity in deep layers, allowing it to tackle both redundancy and dependency for efficient and effective representation learning. Finally, we flexibly stack our blocks into a new powerful backbone and adopt it for various vision tasks, from the image to the video domain and from classification to dense prediction. Without any extra training data, our UniFormer achieves 86.3% top-1 accuracy on the ImageNet-1K classification task. With only ImageNet-1K pre-training, it achieves state-of-the-art performance on a broad range of downstream tasks: 82.9%/84.8% top-1 accuracy on Kinetics-400/600, 60.9%/71.2% top-1 accuracy on Something-Something V1/V2 video classification, 53.8 box AP and 46.4 mask AP on COCO object detection, 50.8 mIoU on ADE20K semantic segmentation, and 77.4 AP on COCO pose estimation. Moreover, we build an efficient UniFormer with a concise hourglass design of token shrinking and recovering, which achieves 2-4x higher throughput than recent lightweight models.
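To make the block design described above concrete, the following is a minimal PyTorch sketch based only on the abstract: a local relation aggregator (depthwise convolution over a small neighborhood) in shallow layers and a global one (multi-head self-attention) in deep layers, sharing the same transformer-style layout. All names (UniFormerBlock, use_global, dpe, local) and details such as kernel sizes are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn as nn

class UniFormerBlock(nn.Module):
    """One UniFormer-style block (sketch). `use_global=False` mimics a shallow
    layer (local token affinity via depthwise conv); `use_global=True` mimics
    a deep layer (global token affinity via self-attention)."""

    def __init__(self, dim: int, num_heads: int = 8, use_global: bool = False):
        super().__init__()
        # Dynamic position embedding: a 3x3 depthwise conv adds positional cues.
        self.dpe = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.use_global = use_global
        if use_global:
            self.norm1 = nn.LayerNorm(dim)
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        else:
            self.norm1 = nn.BatchNorm2d(dim)
            # Local affinity: a 5x5 depthwise conv over a small neighborhood.
            self.local = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        x = x + self.dpe(x)                      # inject positional information
        b, c, h, w = x.shape
        if self.use_global:
            t = x.flatten(2).transpose(1, 2)     # (B, H*W, C) token sequence
            n = self.norm1(t)
            t = t + self.attn(n, n, n, need_weights=False)[0]  # global affinity
        else:
            x = x + self.local(self.norm1(x))    # local affinity
            t = x.flatten(2).transpose(1, 2)
        t = t + self.ffn(self.norm2(t))          # shared FFN sub-block
        return t.transpose(1, 2).reshape(b, c, h, w)

# Toy usage: stack a shallow (local) and a deep (global) block.
blocks = nn.Sequential(
    UniFormerBlock(64, use_global=False),
    UniFormerBlock(64, use_global=True),
)
out = blocks(torch.randn(2, 64, 14, 14))  # -> (2, 64, 14, 14)
```

Both branches share the same residual layout; only the relation aggregator differs, which is what lets the paper unify convolution and self-attention in one block format.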
Pages: 12581-12600
Page count: 20