A novel method for image captioning using multimodal feature fusion employing mask RNN and LSTM models

Cited by: 10
Authors
Thangavel, Kumaravel [1 ]
Palanisamy, Natesan [2 ]
Muthusamy, Suresh [3 ]
Mishra, Om Prava [4 ]
Sundararajan, Suma Christal Mary [5 ]
Panchal, Hitesh [6 ]
Loganathan, Ashok Kumar [7 ]
Ramamoorthi, Ponarun [8 ]
Affiliations
[1] Kongu Engn Coll Autonomous, Dept Comp Sci & Engn, Erode, Tamil Nadu, India
[2] Kongu Engn Coll Autonomous, Dept Comp Sci & Engn, Erode, Tamil Nadu, India
[3] Kongu Engn Coll Autonomous, Dept Elect & Commun Engn, Erode, Tamil Nadu, India
[4] Vel Tech Rangarajan Dr Sagunthala R&D Inst Sci & T, Dept Elect & Commun Engn, Chennai, Tamil Nadu, India
[5] Panimalar Engn Coll Autonomous, Dept Informat Technol, Chennai, Tamil Nadu, India
[6] Govt Engn Coll, Dept Mech Engn, Patan, Gujarat, India
[7] PSG Coll Technol, Dept Elect & Elect Engn, Coimbatore, Tamil Nadu, India
[8] Theni Kammavar Sangam Coll Technol, Dept Elect & Elect Engn, Theni, Tamil Nadu, India
Keywords
Image captioning; Mask RCNN; LSTM; Multimodal feature fusion; Semantic feature analysis;
DOI
10.1007/s00500-023-08448-7
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Image captioning enables computers to interpret the content of photographs and generate descriptive text. Deep learning approaches to this task have been widely researched since the field's inception. Nevertheless, existing strategies do not identify all instances that convey conceptual content: most extracted instances are irrelevant to the matching task, and the degree of similarity is determined by only a few semantically relevant ones. The redundant instances can be regarded as noise, since they obstruct the matching of the few meaningful instances and add to the model's computational cost. Existing schemes rely on traditional convolutional neural networks (CNNs), whose structure limits captioning effectiveness; moreover, current approaches frequently require additional object recognition algorithms or costly human labeling to extract the needed information. This research presents a multimodal feature fusion-based deep learning model for image captioning. The encoding layer uses Mask R-CNN (built on Faster R-CNN), the decoding layer uses a long short-term memory (LSTM) network to construct the descriptive text, and the model parameters are optimized by gradient-based methods. In the decoding layer, a dense attention mechanism suppresses interference from non-salient data and preferentially feeds the relevant features into the decoding stage. The model is trained on input images so that, at inference time, it produces captions that closely describe them. Several datasets are used to evaluate the model's precision and the fluency of the language it acquires from analyzing image descriptions, and the results show that the model consistently produces correct descriptions of input images. Classification scores are used to measure the model's effectiveness: with a batch size of 512 and 100 training epochs, the proposed system shows a 95% improvement in performance. Experiments on generic images validate the model's capacity to comprehend images and generate text. The method is implemented using Python frameworks and evaluated with performance metrics including PSNR, RMSE, SSIM, accuracy, recall, F1-score, and precision.
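To make the described pipeline concrete, the following is a minimal Python sketch of such an encoder-decoder captioner, not the authors' implementation: it uses a pretrained torchvision Mask R-CNN backbone as a frozen region-feature source and a single-step LSTM decoder with additive ("dense") attention. The class names (MaskRCNNEncoder, AttentionLSTMDecoder), all dimensions, the vocabulary size, and the BOS/EOS token ids are illustrative assumptions.

    # Minimal sketch of a Mask R-CNN + attention-LSTM captioner (batch size 1).
    # Assumptions: toy vocabulary, FPN levels stand in for detected regions.
    import torch
    import torch.nn as nn
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    class MaskRCNNEncoder(nn.Module):
        """Encode an image into region-level feature vectors via a Mask R-CNN backbone."""
        def __init__(self):
            super().__init__()
            # Pretrained detector used only as a frozen feature extractor (assumption).
            self.detector = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

        @torch.no_grad()
        def forward(self, image):                               # image: (3, H, W)
            fpn = self.detector.backbone(image.unsqueeze(0))    # dict of (1, 256, h, w) maps
            pooled = [f.mean(dim=(2, 3)) for f in fpn.values()] # each (1, 256)
            return torch.cat(pooled, dim=0)                     # (R, 256), one vector per level

    class AttentionLSTMDecoder(nn.Module):
        """One decoding step: additive attention over regions, then an LSTM cell."""
        def __init__(self, vocab_size, feat_dim=256, embed_dim=256, hidden_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.w_feat = nn.Linear(feat_dim, hidden_dim)
            self.w_hid = nn.Linear(hidden_dim, hidden_dim)
            self.score = nn.Linear(hidden_dim, 1)
            self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, regions, token, h, c):                # regions: (R, feat), token: (1,)
            # Score each region against the current hidden state so that
            # non-salient regions receive low weight before entering the LSTM.
            e = self.score(torch.tanh(self.w_feat(regions) + self.w_hid(h)))  # (R, 1)
            alpha = torch.softmax(e, dim=0)
            context = (alpha * regions).sum(dim=0, keepdim=True)              # (1, feat)
            x = torch.cat([self.embed(token), context], dim=-1)               # (1, embed+feat)
            h, c = self.lstm(x, (h, c))
            return self.out(h), h, c                                          # logits: (1, vocab)

    def greedy_caption(encoder, decoder, image, bos=1, eos=2, max_len=20, hidden_dim=512):
        """Greedily decode a caption as a list of token ids (toy token ids assumed)."""
        regions = encoder(image)
        h = torch.zeros(1, hidden_dim)
        c = torch.zeros(1, hidden_dim)
        token, ids = torch.tensor([bos]), []
        for _ in range(max_len):
            logits, h, c = decoder(regions, token, h, c)
            token = logits.argmax(dim=-1)                       # (1,)
            if token.item() == eos:
                break
            ids.append(token.item())
        return ids

    if __name__ == "__main__":
        enc, dec = MaskRCNNEncoder(), AttentionLSTMDecoder(vocab_size=1000)
        print(greedy_caption(enc, dec, torch.rand(3, 224, 224)))

At training time the decoder would instead be fed ground-truth tokens (teacher forcing) and optimized with a cross-entropy loss via a gradient method, matching the gradient-optimization step mentioned in the abstract.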
Pages: 14205-14218
Page count: 14