Enhanced transformer model for video caption generation

Cited: 0
Authors
Varma, Soumya [1 ]
Peter, J. Dinesh [1 ]
Affiliations
[1] Karunya Inst Technol & Sci, Dept CSE, Coimbatore, Tamil Nadu, India
Keywords
curriculum learning; deep learning; encoder-decoder; engineering applications; neural networks; transformer; video caption;
DOI
10.1111/exsy.13392
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Automatic video captioning is the task of describing the content of a video by analysing its spatial and temporal visual aspects and producing a meaningful caption that explains it. A decade of research in this area has produced steep growth in the quality and appropriateness of generated captions relative to the expected result, with methods progressing from very basic approaches to the most advanced transformer models. A machine-generated caption must satisfy several expectations: while the task may be trivial for humans, it is far harder for a machine to analyse the content and generate a semantically coherent description, and the caption, produced in natural language, must also respect that language's lexical and syntactic structure. Video captioning is therefore a combination of computer vision and natural language processing. Beginning with template-based conventional approaches, the field has moved past statistical methods and traditional deep learning approaches and now relies on transformers. This work presents an extensive study of the literature and proposes an improved transformer-based architecture for video captioning. The architecture uses an encoder-decoder model whose layers contain two and three sublayers, respectively; multi-head self-attention and cross-attention are part of the model and contribute strongly to its results. The decoder is auto-regressive and uses a masked attention layer to prevent the model from foreseeing future words in the caption. Our work uses an enhanced encoder-decoder transformer with a CNN for feature extraction, which captures long-range dependencies and temporal relationships more effectively. The model has been evaluated on benchmark datasets, compared with state-of-the-art methods, and found to perform slightly better, with small variations in BLEU, METEOR, ROUGE and CIDEr scores. Furthermore, we propose that incorporating curriculum learning can improve the results further.
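The architecture summarized in the abstract maps naturally onto a standard encoder-decoder transformer. The sketch below is a minimal illustration, not the authors' released code: it assumes pre-extracted CNN frame features (a 2048-dimensional ResNet-style vector per frame) feeding an encoder whose layers have two sublayers (self-attention, feed-forward), and an auto-regressive decoder whose layers have three sublayers (masked self-attention, cross-attention over the frame features, feed-forward). All dimensions, the vocabulary size, and the class name are illustrative assumptions.

```python
import torch
import torch.nn as nn


class VideoCaptionTransformer(nn.Module):
    """Encoder-decoder transformer over pre-extracted CNN frame features."""

    def __init__(self, vocab_size, feat_dim=2048, d_model=512,
                 nhead=8, num_layers=4, max_len=30):
        super().__init__()
        # Project CNN frame features (e.g. from a ResNet) into d_model.
        self.feat_proj = nn.Linear(feat_dim, d_model)
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        # Each encoder layer has two sublayers (self-attention, feed-forward);
        # each decoder layer has three (masked self-attention, cross-attention
        # over the encoder output, feed-forward), as described in the abstract.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, frame_feats, caption_tokens):
        # frame_feats: (batch, num_frames, feat_dim) CNN features per frame.
        # caption_tokens: (batch, seq_len) token ids (teacher-forced input).
        memory_in = self.feat_proj(frame_feats)
        seq_len = caption_tokens.size(1)
        positions = torch.arange(seq_len, device=caption_tokens.device)
        tgt = self.tok_embed(caption_tokens) + self.pos_embed(positions)
        # The causal mask keeps the decoder from "foreseeing" future words.
        causal_mask = self.transformer.generate_square_subsequent_mask(
            seq_len).to(caption_tokens.device)
        hidden = self.transformer(memory_in, tgt, tgt_mask=causal_mask)
        return self.out(hidden)  # (batch, seq_len, vocab_size) logits


if __name__ == "__main__":
    model = VideoCaptionTransformer(vocab_size=10000)
    feats = torch.randn(2, 16, 2048)            # 16 frames of CNN features
    tokens = torch.randint(0, 10000, (2, 12))   # partial caption token ids
    print(model(feats, tokens).shape)           # torch.Size([2, 12, 10000])
```

During training the caption tokens are teacher-forced as shown; at inference the decoder would be run token by token, feeding each predicted word back in, which is what the abstract means by the decoder being auto-regressive.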
Pages: 19
Related Papers
50 records
  • [21] Trends in Event Understanding and Caption Generation/Reconstruction in Dense Video: A Review
    Ekanayake, Ekanayake Mudiyanselage Chulabhaya Lankanatha
    Gezawa, Abubakar Sulaiman
    Lei, Yunqi
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 78 (03): : 2941 - 2965
  • [22] Context Aware Video Caption Generation with Consecutive Differentiable Neural Computer
    Kim, Jonghong
    Choi, Inchul
    Lee, Minho
    ELECTRONICS, 2020, 9 (07) : 1 - 15
  • [23] Image Caption Generation using Deep Learning For Video Summarization Applications
    Inayathulla, Mohammed
    Karthikeyan, C.
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2024, 15 (01) : 565 - 572
  • [24] CLIP4Caption: CLIP for Video Caption
    Tang, Mingkang
    Wang, Zhanyu
    Liu, Zhenhua
    Rao, Fengyun
    Li, Dian
    Li, Xiu
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 4858 - 4862
  • [25] Transformer model incorporating local graph semantic attention for image caption
    Qian, Kui
    Pan, Yuchen
    Xu, Hao
    Tian, Lei
    VISUAL COMPUTER, 2024, 40 (09): : 6533 - 6544
  • [26] Mobilenet V3-transformer, a lightweight model for image caption
    Zhang X.
    Fan M.
    Hou M.
    International Journal of Computers and Applications, 2024, 46 (06) : 418 - 426
  • [27] Enhanced Cross-Modal Transformer Model for Video Semantic Similarity Measurement
    Li, Da
    Zhu, Boqing
    Xu, Kele
    Yang, Sen
    Feng, Dawei
    Liu, Bo
    Wang, Huaimin
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2024, 71 (01) : 475 - 479
  • [28] Image caption generation using transformer learning methods: a case study on instagram image
    Dittakan, Kwankamon
    Prompitak, Kamontorn
    Thungklang, Phutphisit
    Wongwattanakit, Chatchawan
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (15) : 46397 - 46417
  • [29] Attention-based Visual-Audio Fusion for Video Caption Generation
    Guo, Ningning
    Liu, Huaping
    Jiang, Linhua
    2019 IEEE 4TH INTERNATIONAL CONFERENCE ON ADVANCED ROBOTICS AND MECHATRONICS (ICARM 2019), 2019, : 839 - 844
  • [30] Layer-wise enhanced transformer with multi-modal fusion for image caption
    Li, Jingdan
    Wang, Yi
    Zhao, Dexin
    MULTIMEDIA SYSTEMS, 2023, 29 (03) : 1043 - 1056