Rotary Transformer for Image Captioning

Cited: 0
Authors
Qiu, Yile [1 ]
Zhu, Li [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Software Engn, Xian 710049, Shaanxi, Peoples R China
Keywords
Image captioning; Transformer; sequence-to-sequence; RoPE;
DOI
10.1117/12.2644069
CLC Number
O43 [Optics];
Subject Classification Number
070207 ; 0803 ;
Abstract
Deep-learning-based image captioning spans two major domains: computer vision and natural language processing. The Transformer architecture has achieved leading performance in natural language processing, and studies applying Transformers as image-captioning encoders and decoders have demonstrated better performance than previous solutions. Positional encoding is an essential component of the Transformer. The Rotary Transformer (RoFormer), which introduced Rotary Position Embedding (RoPE), has achieved comparable or superior performance on various language-modeling tasks, yet limited work has adapted the RoFormer architecture to image captioning. This study investigates the positional encoding of the Transformer architecture; the proposed model consists of a modified RoFormer as the encoder and BERT as the decoder. With extracted features as inputs, together with some training tricks, the model achieves similar or better performance on the MSCOCO dataset compared with "CNN+RNN" models and regular Transformer solutions.
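The core idea behind RoPE, as described above, is to encode position by rotating each pair of embedding dimensions by a position-dependent angle, so that the attention dot product between a query and a key depends only on their relative offset. As a minimal illustrative sketch (not the paper's actual code; shapes and the `base` constant follow the original RoFormer convention):

```python
import numpy as np

def rotary_position_embedding(x, base=10000.0):
    """Apply Rotary Position Embedding (RoPE) to a sequence of vectors.

    x: array of shape (seq_len, dim), with dim even.
    Each dimension pair (2i, 2i+1) at position m is rotated by the
    angle m * theta_i, where theta_i = base ** (-2i / dim).
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0, "embedding dimension must be even"
    # Per-pair rotation frequency theta_i = base^(-2i/dim)
    theta = base ** (-np.arange(0, dim, 2) / dim)        # (dim/2,)
    # Rotation angle for every (position, pair) combination
    angles = np.outer(np.arange(seq_len), theta)         # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x_even, x_odd = x[:, 0::2], x[:, 1::2]
    # Standard 2-D rotation applied independently to each dimension pair
    out = np.empty_like(x)
    out[:, 0::2] = x_even * cos - x_odd * sin
    out[:, 1::2] = x_even * sin + x_odd * cos
    return out
```

Because each pair is rotated by `m * theta_i`, the inner product of a rotated query at position `m` and a rotated key at position `n` depends only on `m - n`, which is what lets RoPE inject relative-position information through absolute rotations.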
Pages: 6
Related Papers
50 records in total
  • [1] Distance Transformer for Image Captioning
    Wang, Jiarong
    Lu, Tongwei
    Liu, Xuanxuan
    Yang, Qi
    [J]. 2021 4TH INTERNATIONAL CONFERENCE ON ROBOTICS, CONTROL AND AUTOMATION ENGINEERING (RCAE 2021), 2021, : 73 - 76
  • [2] Entangled Transformer for Image Captioning
    Li, Guang
    Zhu, Linchao
    Liu, Ping
    Yang, Yi
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 8927 - 8936
  • [3] Boosted Transformer for Image Captioning
    Li, Jiangyun
    Yao, Peng
    Guo, Longteng
    Zhang, Weicun
    [J]. APPLIED SCIENCES-BASEL, 2019, 9 (16)
  • [4] Complementary Shifted Transformer for Image Captioning
    Liu, Yanbo
    Yang, You
    Xiang, Ruoyu
    Ma, Jixin
    [J]. NEURAL PROCESSING LETTERS, 2023, 55 (06) : 8339 - 8363
  • [5] Reinforced Transformer for Medical Image Captioning
    Xiong, Yuxuan
    Du, Bo
    Yan, Pingkun
    [J]. MACHINE LEARNING IN MEDICAL IMAGING (MLMI 2019), 2019, 11861 : 673 - 680
  • [6] ReFormer: The Relational Transformer for Image Captioning
    Yang, Xuewen
    Liu, Yingru
    Wang, Xin
    [J]. PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5398 - 5406
  • [7] Transformer with a Parallel Decoder for Image Captioning
    Wei, Peilang
    Liu, Xu
    Luo, Jun
    Pu, Huayan
    Huang, Xiaoxu
    Wang, Shilong
    Cao, Huajun
    Yang, Shouhong
    Zhuang, Xu
    Wang, Jason
    Yue, Hong
    Ji, Cheng
    Zhou, Mingliang
    [J]. INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2024, 38 (01)
  • [8] Image captioning with transformer and knowledge graph
    Zhang, Yu
    Shi, Xinyu
    Mi, Siya
    Yang, Xu
    [J]. PATTERN RECOGNITION LETTERS, 2021, 143 : 43 - 49
  • [10] ETransCap: efficient transformer for image captioning
    Mundu, Albert
    Singh, Satish Kumar
    Dubey, Shiv Ram
    [J]. APPLIED INTELLIGENCE, 2024, 54 (21) : 10748 - 10762