HIST: Hierarchical and sequential transformer for image captioning

Cited by: 0
|
Authors
Lv, Feixiao [1 ,2 ]
Wang, Rui [1 ,2 ]
Jing, Lihua [1 ,2 ]
Dai, Pengwen [3 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyberspace Secur, Beijing, Peoples R China
[3] Sun Yat Sen Univ, Sch Cyber Sci & Technol, Shenzhen Campus, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
computer vision; feature extraction; learning (artificial intelligence); neural nets;
DOI
10.1049/cvi2.12305
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image captioning aims to automatically generate a natural language description of a given image, and most state-of-the-art models have adopted an encoder-decoder transformer framework. Such transformer structures, however, show two main limitations in the task of image captioning. Firstly, the traditional transformer obtains high-level fusion features to decode while ignoring other-level features, resulting in losses of image content. Secondly, the transformer is weak in modelling the natural order characteristics of language. To address these issues, the authors propose a HIerarchical and Sequential Transformer (HIST) structure, which forces each layer of the encoder and decoder to focus on features of different granularities and strengthens the sequential semantic information. Specifically, to capture the details of different levels of features in the image, the authors combine the visual features of multiple regions and divide them into multiple levels in different ways. In addition, to enhance the sequential information, the sequential enhancement module in each decoder layer block extracts different levels of features for sequential semantic extraction and expression. Extensive experiments on the public datasets MS-COCO and Flickr30k have demonstrated the effectiveness of the proposed method and show that it outperforms most previous state-of-the-art methods. The authors propose hierarchical encoder-decoder blocks in their novel hierarchical and sequential transformer to capture multi-granularity image information, combining them with a sequential enhancement module to generate rich and smooth image descriptions. The method demonstrated good performance in comparison with numerous state-of-the-art methods on the MS-COCO dataset.
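The core architectural idea in the abstract, keeping every encoder layer's output as a separate feature level and letting each decoder layer cross-attend to a different level, can be sketched in a few lines. This is a minimal single-head NumPy illustration of that idea only, not the authors' implementation: all function names are invented for this sketch, feed-forward sublayers, layer normalisation, masking, and the sequential enhancement module are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # single-head scaled dot-product attention
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def hierarchical_encode(regions, num_layers=3):
    # unlike a standard transformer, keep the output of EVERY
    # encoder layer as a feature level instead of only the last
    levels, x = [], regions
    for _ in range(num_layers):
        x = x + attention(x, x, x)  # self-attention with residual
        levels.append(x)
    return levels

def hierarchical_decode(word_embs, levels):
    # decoder layer i cross-attends to encoder feature level i,
    # so each layer sees image features of a different granularity
    y = word_embs
    for lvl in levels:
        y = y + attention(y, y, y)      # self-attention (mask omitted)
        y = y + attention(y, lvl, lvl)  # cross-attention to one level
    return y

rng = np.random.default_rng(0)
regions = rng.standard_normal((36, 64))  # 36 region features, dim 64
words = rng.standard_normal((10, 64))    # 10 word embeddings, dim 64
levels = hierarchical_encode(regions)
out = hierarchical_decode(words, levels)
print(out.shape)
```

The decoder output has one row per word position, so `out.shape` is `(10, 64)`; in a full model each row would be projected to vocabulary logits.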
Pages: 1043-1056
Page count: 14