Enhanced transformer model for video caption generation

Cited by: 0
Authors
Varma, Soumya [1]
Peter, J. Dinesh [1]
Affiliation
[1] Karunya Inst Technol & Sci, Dept CSE, Coimbatore, Tamil Nadu, India
Keywords
curriculum learning; deep learning; encoder-decoder; engineering applications; neural networks; transformer; video caption;
DOI
10.1111/exsy.13392
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Automatic video captioning describes the content of a video by analysing its visual aspects across space and time and producing a meaningful caption that explains the video. A decade of research in this area has produced steep growth in the quality and appropriateness of generated captions relative to the expected result, and the field has advanced from very basic methods to the most recent transformer-based methods. A machine-generated caption for a video must adhere to many expected standards. For humans this task may be trivial, but it is not so for a machine, which must analyse the content and generate a semantically coherent description of it. A caption generated in a natural language must also respect that language's lexical and syntactic structure. Video captioning is therefore a culmination of computer vision and natural language processing tasks. Beginning with conventional template-based approaches, the field has surpassed statistical methods and traditional deep learning approaches and is now dominated by transformers. This work makes an extensive study of the literature and proposes an improved transformer-based architecture for video captioning. The architecture uses an encoder and a decoder with two and three sublayers, respectively; multi-head self-attention and cross-attention are part of the model and yield very beneficial results. The decoder is auto-regressive and uses a masked layer to prevent the model from seeing future words in the caption. Our work uses this enhanced encoder-decoder transformer with a CNN for feature extraction, which captures long-range dependencies and temporal relationships more effectively. The model has been evaluated on benchmark datasets, compared with state-of-the-art methods, and found to perform slightly better.
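The paper does not include code, but the causal masking the abstract describes (the decoder may not attend to future words) can be sketched in plain Python. The helper names `causal_mask` and `masked_attention` are illustrative, not the authors' API; the standard trick shown is setting disallowed attention scores to negative infinity before the softmax so they receive zero weight:

```python
import math

def causal_mask(size):
    # Lower-triangular mask: position i may attend to positions 0..i,
    # never to the "future" positions i+1..size-1.
    return [[col <= row for col in range(size)] for row in range(size)]

def softmax(xs):
    # Numerically stable softmax; exp(-inf) evaluates to 0.0.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def masked_attention(scores, mask):
    # Set disallowed scores to -inf, then normalise each row with softmax,
    # so masked-out positions contribute zero attention weight.
    out = []
    for row_scores, row_mask in zip(scores, mask):
        masked = [s if ok else float("-inf")
                  for s, ok in zip(row_scores, row_mask)]
        out.append(softmax(masked))
    return out

# With uniform scores, position 0 attends only to itself,
# position 1 splits its attention between positions 0 and 1, and so on.
weights = masked_attention([[0.0, 0.0, 0.0]] * 3, causal_mask(3))
```

In a full transformer decoder this mask is applied inside every self-attention sublayer; at training time it lets all caption positions be predicted in parallel while preserving the auto-regressive property.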
The performance scores vary slightly across BLEU, METEOR, ROUGE, and CIDEr. Furthermore, we propose that incorporating curriculum learning can further improve these results.
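Curriculum learning, as proposed above, trains on easy samples before hard ones. The abstract does not specify a difficulty measure, so the sketch below uses caption length purely as a hypothetical proxy; `curriculum_order` is an illustrative helper, not the authors' implementation:

```python
def curriculum_order(samples, difficulty):
    # Sort training samples from easy to hard; `difficulty` is any
    # scoring function mapping a sample to a comparable value.
    return sorted(samples, key=difficulty)

# Hypothetical (video_id, caption) pairs; shorter captions treated as easier.
samples = [
    ("v1", "a man is cooking pasta in a small kitchen"),
    ("v2", "a dog runs"),
    ("v3", "two children play football on a sunny field"),
]

# Difficulty proxy: number of words in the reference caption.
ordered = curriculum_order(samples, lambda s: len(s[1].split()))
```

In practice the sorted stream is usually fed to the trainer in stages (easy subset first, gradually admitting harder samples) rather than as a single fixed ordering.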
Pages: 19
Related papers (50 total)
  • [1] Image Caption Generation With Adaptive Transformer
    Zhang, Wei
    Nie, Wenbo
    Li, Xinle
    Yu, Yao
    2019 34TH YOUTH ACADEMIC ANNUAL CONFERENCE OF CHINESE ASSOCIATION OF AUTOMATION (YAC), 2019, : 521 - 526
  • [2] A Multimodal Framework for Video Caption Generation
    Bhooshan, Reshmi S.
    Suresh, K.
    IEEE Access, 2022, 10 : 92166 - 92176
  • [4] Transformer based image caption generation for news articles
    Pande, Ashtavinayak
    Pandey, Atul
    Solanki, Ayush
    Shanbhag, Chinmay
    Motghare, Manish
    INTERNATIONAL JOURNAL OF NEXT-GENERATION COMPUTING, 2023, 14 (01):
  • [5] A transformer-based Urdu image caption generation
    Hadi, M.
    Safder, I.
    Waheed, H.
    Zaman, F.
    Aljohani, N. R.
    Nawaz, R.
    Hassan, S. U.
    Sarwar, R.
    JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING, 2024, 15 (9) : 3441 - 3457
  • [6] A long video caption generation algorithm for big video data retrieval
    Ding, Songtao
    Qu, Shiru
    Xi, Yuling
    Wan, Shaohua
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2019, 93 : 583 - 595
  • [7] A Robust Endpoint Detection Algorithm for Video Caption Generation
    Li, Qi
    Ma, Huadong
    Feng, Shuo
    PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE FOR YOUNG COMPUTER SCIENTISTS, VOLS 1-5, 2008, : 942 - 946
  • [8] A Novel Image Caption Model Based on Transformer Structure
    Wang, Shuang
    Zhu, Yaping
    2021 IEEE INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING (ICICSE 2021), 2021, : 144 - 148
  • [9] Integrating Both Visual and Audio Cues for Enhanced Video Caption
    Hao, Wangli
    Zhang, Zhaoxiang
    Guan, He
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 6894 - 6901
  • [10] Remote sensing image caption generation via transformer and reinforcement learning
    Shen, Xiangqing
    Liu, Bing
    Zhou, Yong
    Zhao, Jiaqi
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (35-36) : 26661 - 26682