Contextual Transformer Networks for Visual Recognition

Cited by: 328
Authors
Li, Yehao [1]
Yao, Ting [1]
Pan, Yingwei [1]
Mei, Tao [1]
Affiliations
[1] JD Explore Academy, Beijing 101111, People's Republic of China
Funding
National Key R&D Program of China;
Keywords
Transformers; Convolution; Visualization; Computer architecture; Task analysis; Image recognition; Object detection; Transformer; self-attention; vision transformer; image recognition;
DOI
10.1109/TPAMI.2022.3164083
CLC Number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transformer with self-attention has revolutionized the field of natural language processing and has recently inspired Transformer-style architecture designs with competitive results on numerous computer vision tasks. Nevertheless, most existing designs directly employ self-attention over a 2D feature map to obtain the attention matrix from pairs of isolated queries and keys at each spatial location, leaving the rich contexts among neighboring keys under-exploited. In this work, we design a novel Transformer-style module, i.e., the Contextual Transformer (CoT) block, for visual recognition. The design fully capitalizes on the contextual information among input keys to guide the learning of a dynamic attention matrix and thus strengthens the capacity of visual representation. Technically, the CoT block first contextually encodes the input keys via a 3×3 convolution, leading to a static contextual representation of the inputs. We further concatenate the encoded keys with the input queries to learn a dynamic multi-head attention matrix through two consecutive 1×1 convolutions. The learnt attention matrix is multiplied by the input values to obtain the dynamic contextual representation of the inputs. The fusion of the static and dynamic contextual representations is finally taken as the output. Our CoT block is appealing in that it can readily replace each 3×3 convolution in ResNet architectures, yielding a Transformer-style backbone named Contextual Transformer Networks (CoTNet). Through extensive experiments over a wide range of applications (e.g., image recognition, object detection, instance segmentation, and semantic segmentation), we validate the superiority of CoTNet as a stronger backbone.
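The abstract describes the CoT block concretely enough to sketch in code. Below is a minimal PyTorch sketch assuming only what the abstract states: a 3×3 convolution over the keys yields the static context, two consecutive 1×1 convolutions over the concatenated keys and queries yield a dynamic attention matrix, the attention is multiplied with the values, and the two contexts are fused. The class name CoTBlock, the channel-reduction ratio, the softmax normalization over spatial positions, the additive fusion, and the omission of multi-head grouping are all illustrative assumptions; the authors' released implementation differs in these details.

import torch
import torch.nn as nn


class CoTBlock(nn.Module):
    """Simplified Contextual Transformer (CoT) block, following the abstract:
    static context from a 3x3 convolution over the keys, a dynamic attention
    matrix from two consecutive 1x1 convolutions over the concatenated
    (key, query) pair, attention applied to the values, and a fusion of the
    static and dynamic contexts as the output."""

    def __init__(self, dim: int, kernel_size: int = 3, reduction: int = 4):
        super().__init__()
        # Static context: 3x3 convolution that contextually encodes the input keys.
        self.key_embed = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(dim),
            nn.ReLU(inplace=True),
        )
        # Values: 1x1 convolution of the input.
        self.value_embed = nn.Sequential(
            nn.Conv2d(dim, dim, 1, bias=False),
            nn.BatchNorm2d(dim),
        )
        # Dynamic attention: two consecutive 1x1 convolutions over [keys, queries].
        self.attn_embed = nn.Sequential(
            nn.Conv2d(2 * dim, dim // reduction, 1, bias=False),
            nn.BatchNorm2d(dim // reduction),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim // reduction, dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = x                                    # queries are the input feature map
        k_static = self.key_embed(x)             # static contextual representation
        v = self.value_embed(x).view(b, c, -1)   # values, flattened over space

        # Dynamic attention matrix from the concatenated static keys and queries,
        # normalized over spatial positions (an illustrative simplification).
        attn = self.attn_embed(torch.cat([k_static, q], dim=1))
        attn = attn.view(b, c, -1).softmax(dim=-1)

        # Dynamic contextual representation: attention multiplied with the values.
        k_dynamic = (attn * v).view(b, c, h, w)

        # Fuse the static and dynamic contexts (simple addition here).
        return k_static + k_dynamic

Because the block keeps the channel count and spatial size of its input, it can stand in for a 3×3 convolution inside a ResNet bottleneck, which is how the abstract describes assembling CoTNet; for example, CoTBlock(dim=64)(torch.randn(2, 64, 32, 32)) returns a tensor of shape (2, 64, 32, 32).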
Pages: 1489-1500
Number of pages: 12
Related Papers
50 records in total
  • [1] Micro-expression recognition based on contextual transformer networks
    Yang, Jun
    Wu, Zilu
    Wu, Renbiao
    VISUAL COMPUTER, 2025, 41 (03) : 1527 - 1541
  • [2] Visual Speech Recognition in Natural Scenes Based on Spatial Transformer Networks
    Yu, Jin
    Wang, Shilin
    2020 IEEE 14TH INTERNATIONAL CONFERENCE ON ANTI-COUNTERFEITING, SECURITY, AND IDENTIFICATION (ASID), 2020, : 1 - 5
  • [3] Speech recognition by integrating audio, visual and contextual features based on neural networks
    Kim, MW
    Ryu, JW
    Kim, EJ
    ADVANCES IN NATURAL COMPUTATION, PT 2, PROCEEDINGS, 2005, 3611 : 155 - 164
  • [4] Contextual effects on visual word recognition
    Gonzalez-Garrido, Andres A.
    Gomez-Velazquez, Fabiola R.
    Rodriguez-Santillan, Elizabeth
    Zarabozo-Hurtado, Daniel
    PSYCHOPHYSIOLOGY, 2008, 45 : S90 - S90
  • [5] Visual contextual relationship augmented transformer for image captioning
    Su, Qiang
    Hu, Junbo
    Li, Zhixin
    APPLIED INTELLIGENCE, 2024, 54 (06) : 4794 - 4813
  • [6] ResT: An Efficient Transformer for Visual Recognition
    Zhang, Qing-Long
    Yang, Yu-Bin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [7] Visual text recognition through contextual processing
    Sinha, RMK
    Prasada, B
    PATTERN RECOGNITION, 1988, 21 (05) : 463 - 479
  • [8] Contextual Debiasing for Visual Recognition with Causal Mechanisms
    Liu, Ruyang
    Liu, Hao
    Li, Ge
    Hou, Haodi
    Yu, TingHao
    Yang, Tao
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 12745 - 12755
  • [9] Attention-Guided Spatial Transformer Networks for Fine-Grained Visual Recognition
    Liu, Dichao
    Wang, Yu
    Kato, Jien
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2019, E102D (12) : 2577 - 2586