UAT: Universal Attention Transformer for Video Captioning

Cited by: 3
Authors
Im, Heeju [1 ]
Choi, Yong-Suk [2 ]
Affiliations
[1] Hanyang Univ, Dept Artificial Intelligence, Seoul 04763, South Korea
[2] Hanyang Univ, Dept Comp Sci & Engn, Seoul 04763, South Korea
Funding
National Research Foundation of Singapore;
Keywords
video captioning; transformer; end-to-end learning;
DOI
10.3390/s22134817
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
Video captioning with encoder-decoder structures is a successful approach to sentence generation. In addition, using several feature extraction networks to obtain multiple kinds of visual features during encoding is a standard way to improve model performance. Such feature extraction networks are typically weight-frozen and based on convolutional neural networks (CNNs). However, this traditional feature extraction approach has several problems. First, because the feature extraction model is frozen, it cannot be trained further by backpropagating the loss obtained from video captioning training; in particular, this prevents the feature extraction model from learning more about spatial information. Second, model complexity increases further when multiple CNNs are used. Additionally, the authors of the Vision Transformer (ViT) pointed out the inductive bias of CNNs arising from local receptive fields. Therefore, we propose a full transformer structure trained end-to-end for video captioning to overcome these problems. As the feature extraction model, we use a vision transformer (ViT) and propose feature extraction gates (FEGs) to enrich the input of the captioning model through that extraction model. Additionally, we design a universal encoder attention (UEA) that takes the outputs of all encoder layers and performs self-attention over them. The UEA addresses the lack of information about the video's temporal relationships, since our method uses only the appearance feature. We evaluate our model against several recent models on two benchmark datasets, MSR-VTT and MSVD, and show competitive performance. Although the proposed model performs captioning with only a single feature, in some cases it outperforms models that use several features.
Pages: 16
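The abstract only names the components; the following PyTorch sketch is one possible reading of them, not the authors' released implementation. It assumes a sigmoid feature extraction gate (FEG) applied to per-frame ViT appearance features, and a universal encoder attention (UEA) that keeps the output of every encoder layer and self-attends across the stacked outputs before passing them to the caption decoder. All module names, dimensions, and the gating and fusion forms are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): one reading of the FEG and UEA
# components named in the abstract, with assumed shapes and layer counts.
import torch
import torch.nn as nn


class FeatureExtractionGate(nn.Module):
    """Assumed FEG form: a learned sigmoid gate that rescales the per-frame
    appearance features produced by a trainable (not frozen) ViT."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) appearance features
        return x * self.gate(x)


class UniversalEncoderAttention(nn.Module):
    """Assumed UEA form: keep every encoder layer's output and run
    self-attention over the stack, so the decoder can attend to all
    encoder depths rather than only the final layer."""

    def __init__(self, dim: int, num_layers: int = 4, num_heads: int = 8):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                       batch_first=True)
            for _ in range(num_layers)
        )
        self.fuse = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) gated appearance features
        outputs, h = [], x
        for layer in self.layers:
            h = layer(h)
            outputs.append(h)
        # Concatenate layer outputs along the token axis: (batch, layers*frames, dim)
        stacked = torch.cat(outputs, dim=1)
        fused, _ = self.fuse(stacked, stacked, stacked)
        return self.norm(fused)  # memory handed to the caption decoder


if __name__ == "__main__":
    feats = torch.randn(2, 16, 512)            # 2 clips, 16 frames, 512-d features
    gated = FeatureExtractionGate(512)(feats)
    memory = UniversalEncoderAttention(512)(gated)
    print(memory.shape)                        # torch.Size([2, 64, 512])
```

Attending over the outputs of all encoder depths is the plausible mechanism by which a single appearance feature can still supply multi-level temporal context; the exact formulation in the paper may differ.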