GRIT: Faster and Better Image Captioning Transformer Using Dual Visual Features

Cited by: 60
Authors
Nguyen, Van-Quang [1]
Suganuma, Masanori [1,2]
Okatani, Takayuki [1,2]
Affiliations
[1] Tohoku Univ, Grad Sch Informat Sci, Sendai, Miyagi, Japan
[2] RIKEN, Ctr AIP, Tokyo, Japan
Keywords
Image captioning; Grid features; Region features
DOI
10.1007/978-3-031-20059-5_10
CLC Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Current state-of-the-art methods for image captioning employ region-based features, as they provide object-level information that is essential to describing the content of images; they are usually extracted by an object detector such as Faster R-CNN. However, region features have several issues, such as a lack of contextual information, the risk of inaccurate detection, and high computational cost. The first two issues can be resolved by additionally using grid-based features, yet how to extract and fuse these two types of features remains largely unexplored. This paper proposes a Transformer-only neural architecture, dubbed GRIT (Grid- and Region-based Image captioning Transformer), that effectively utilizes the two visual features to generate better captions. GRIT replaces the CNN-based detector employed in previous methods with a DETR-based one, making it computationally faster. Moreover, its monolithic design, consisting only of Transformers, enables end-to-end training of the model. This design and the integration of the dual visual features bring about significant performance improvements: experiments on several image captioning benchmarks show that GRIT outperforms previous methods in both inference accuracy and speed.
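To make the dual-feature fusion described above concrete, the following is a minimal PyTorch sketch of a caption-decoder layer that cross-attends separately to grid features and region features and sums the two attended streams. It is an illustration only, not the authors' released implementation: the class name DualFeatureDecoderLayer, the sum-based fusion, and all dimensions are assumptions.

import torch
import torch.nn as nn

class DualFeatureDecoderLayer(nn.Module):
    """One caption-decoder layer attending to both visual feature sets.

    Hypothetical sketch: not the GRIT reference code.
    """
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.grid_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.region_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, words, grid_feats, region_feats, causal_mask):
        # Masked self-attention over the partially generated caption.
        x = self.norm1(words + self.self_attn(words, words, words,
                                              attn_mask=causal_mask)[0])
        # Attend separately to grid features (scene-level context) and
        # region features (object-level detail); summing the two attended
        # streams is one plausible fusion, not necessarily the paper's.
        g = self.grid_attn(x, grid_feats, grid_feats)[0]
        r = self.region_attn(x, region_feats, region_feats)[0]
        x = self.norm2(x + g + r)
        return self.norm3(x + self.ffn(x))

# Toy shapes: 2 captions of length 12, 49 grid tokens, 20 region tokens.
layer = DualFeatureDecoderLayer()
words = torch.randn(2, 12, 512)
grid = torch.randn(2, 49, 512)
regions = torch.randn(2, 20, 512)
mask = torch.triu(torch.full((12, 12), float("-inf")), diagonal=1)
out = layer(words, grid, regions, mask)
print(out.shape)  # torch.Size([2, 12, 512])

Summing the two cross-attention outputs is the simplest fusion one could try; gated or concatenation-based variants would be equally consistent with the abstract's description.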
Pages: 167-184
Number of pages: 18
Related Papers (50 records in total; items [21]-[30] shown below)
  • [21] Dual Graph Convolutional Networks with Transformer and Curriculum Learning for Image Captioning
    Dong, Xinzhi
    Long, Chengjiang
    Xu, Wenju
    Xiao, Chunxia
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2615 - 2624
  • [22] Improving Remote Sensing Image Captioning by Combining Grid Features and Transformer
    Zhuang, Shuo
    Wang, Ping
    Wang, Gang
    Wang, Di
    Chen, Jinyong
    Gao, Feng
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19
  • [23] GRPIC: an end-to-end image captioning model using three visual features
    Peng, Shixin
    Xiong, Can
    Liu, Leyuan
    Yang, Laurence T.
    Chen, Jingying
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2025, 16 (03) : 1559 - 1572
  • [24] Image Captioning Based on Visual Relevance and Context Dual Attention
    Liu M.-F.
    Shi Q.
    Nie L.-Q.
    Ruan Jian Xue Bao/Journal of Software, 2022, 33 (09):
  • [25] Dual Transformer Decoder based Features Fusion Network for Automated Audio Captioning
    Sun, Jianyuan
    Liu, Xubo
    Mei, Xinhao
    Kilic, Volkan
    Plumbley, Mark D.
    Wang, Wenwu
    INTERSPEECH 2023, 2023, : 4164 - 4168
  • [26] Transformer with multi-level grid features and depth pooling for image captioning
    Bui, Doanh C.
    Nguyen, Tam V.
    Nguyen, Khang
    MACHINE VISION AND APPLICATIONS, 2024, 35 (05)
  • [27] A Dual-Feature-Based Adaptive Shared Transformer Network for Image Captioning
    Shi, Yinbin
    Xia, Ji
    Zhou, MengChu
    Cao, Zhengcai
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73 : 1 - 13
  • [28] A Concise and Varied Visual Features-Based Image Captioning Model with Visual Selection
    Thobhani, Alaa
    Zou, Beiji
    Kui, Xiaoyan
    Abdussalam, Amr
    Asim, Muhammad
    Ahmed, Naveed
    Alshara, Mohammed Ali
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 81 (02) : 2873 - 2894
  • [29] Image captioning in Bengali language using visual attention
    Masud, Adiba
    Hosen, Md. Biplob
    Habibullah, Md.
    Anannya, Mehrin
    Kaiser, M. Shamim
    PLOS ONE, 2025, 20 (02):
  • [30] Matching Visual Features to Hierarchical Semantic Topics for Image Paragraph Captioning
    Guo, Dandan
    Lu, Ruiying
    Chen, Bo
    Zeng, Zequn
    Zhou, Mingyuan
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 : 1920 - 1937