HIST: Hierarchical and sequential transformer for image captioning

Cited by: 0
Authors
Lv, Feixiao [1 ,2 ]
Wang, Rui [1 ,2 ]
Jing, Lihua [1 ,2 ]
Dai, Pengwen [3 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyberspace Secur, Beijing, Peoples R China
[3] Sun Yat Sen Univ, Sch Cyber Sci & Technol, Shenzhen Campus, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
computer vision; feature extraction; learning (artificial intelligence); neural nets;
DOI
10.1049/cvi2.12305
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Image captioning aims to automatically generate a natural language description of a given image, and most state-of-the-art models adopt an encoder-decoder transformer framework. Such transformer structures, however, show two main limitations in the image captioning task. Firstly, the traditional transformer decodes only high-level fused features and ignores features at other levels, resulting in a loss of image content. Secondly, the transformer is weak at modelling the natural ordering characteristics of language. To address these issues, the authors propose a HIerarchical and Sequential Transformer (HIST) structure, which forces each layer of the encoder and decoder to focus on features of a different granularity and strengthens the sequential semantic information. Specifically, to capture details at different feature levels in the image, the authors combine the visual features of multiple regions and partition them into multiple levels in different ways. In addition, to enhance the sequential information, a sequential enhancement module in each decoder layer block extracts features at different levels for sequential semantic extraction and expression. Extensive experiments on the public MS-COCO and Flickr30k datasets demonstrate the effectiveness of the proposed method and show that it outperforms most previous state-of-the-art methods. The authors propose hierarchical encoder-decoder blocks in their novel hierarchical and sequential transformer to capture multi-granularity image information, combined with a sequential enhancement module to generate rich and fluent image descriptions. The authors' method demonstrates good performance in comparison with numerous state-of-the-art methods on the MS-COCO dataset.
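The record gives no implementation details beyond the abstract, so the sketch below is only a hypothetical illustration (not the authors' code) of the two ideas described above: a decoder whose i-th layer cross-attends to the i-th encoder level instead of only the top one, plus a recurrent step standing in for the sequential enhancement module. The class name HierarchicalCaptioner, the layer counts and dimensions, and the use of a GRU for sequential enhancement are all assumptions.

```python
# Minimal sketch (assumptions noted above), PyTorch.
import torch
import torch.nn as nn


class HierarchicalCaptioner(nn.Module):
    """Toy model: decoder layer i cross-attends to encoder level i."""

    def __init__(self, d_model=512, n_heads=8, n_layers=3, vocab_size=10000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.enc_layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        self.dec_layers = nn.ModuleList(
            [nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        # Assumed stand-in for the paper's sequential enhancement module:
        # a per-layer GRU that re-injects left-to-right order into decoder states.
        self.seq_enhance = nn.ModuleList(
            [nn.GRU(d_model, d_model, batch_first=True) for _ in range(n_layers)]
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, regions, tokens):
        # regions: (B, N, d_model) region features; tokens: (B, T) word ids.
        levels = []
        x = regions
        for enc in self.enc_layers:           # keep every intermediate encoder level
            x = enc(x)
            levels.append(x)
        T = tokens.size(1)
        # Standard causal mask so each position attends only to earlier words.
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        y = self.embed(tokens)
        for dec, gru, mem in zip(self.dec_layers, self.seq_enhance, levels):
            y = dec(y, mem, tgt_mask=causal)  # decoder layer i reads encoder level i
            y, _ = gru(y)                     # "sequential enhancement" stand-in
        return self.out(y)                    # (B, T, vocab_size) word logits


if __name__ == "__main__":
    model = HierarchicalCaptioner()
    feats = torch.randn(2, 36, 512)           # e.g. 36 detected region features
    caps = torch.randint(0, 10000, (2, 12))   # partial caption token ids
    print(model(feats, caps).shape)           # torch.Size([2, 12, 10000])
```

In this toy setup, shallow decoder layers read lower-level visual features and deeper layers read higher-level ones, while the GRU re-imposes a left-to-right bias on the partially decoded caption; the authors' actual level-partitioning scheme and enhancement module may differ.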
Pages: 1043 - 1056
Page count: 14