Image caption generation is among the most rapidly growing research areas combining image processing methodologies with natural language processing (NLP) techniques. An effective combination of image processing and NLP techniques can revolutionize areas such as content creation, media analysis, and accessibility. This study proposes a novel model for automatic image caption generation that combines visual and linguistic features: visual features are extracted with a Convolutional Neural Network (CNN), and a Long Short-Term Memory (LSTM) network models the linguistic features to generate the caption text. The Microsoft Common Objects in Context (MS COCO) dataset, containing over 330,000 images with corresponding captions, is used to train the proposed model. A comprehensive evaluation of several models, including VGGNet + LSTM, ResNet + LSTM, GoogLeNet + LSTM, VGGNet + RNN, AlexNet + RNN, and AlexNet + LSTM, is conducted across different batch sizes and learning rates. Performance is assessed with the BLEU-2, METEOR, ROUGE-L, and CIDEr metrics. The proposed method demonstrates competitive performance, suggesting its potential for further exploration and refinement. These findings underscore the importance of careful parameter tuning and model selection in image captioning tasks.
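
To make the encoder-decoder pairing described above concrete, the following is a minimal sketch of a CNN-encoder + LSTM-decoder captioning model in PyTorch. The ResNet-50 backbone, embedding and hidden dimensions, and vocabulary size are illustrative assumptions for exposition, not the exact configuration evaluated in the study.

```python
# Illustrative sketch of a CNN + LSTM captioning architecture (assumed hyperparameters).
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        # CNN encoder: pretrained backbone (ResNet-50 here, as an assumed choice)
        # with its classifier head replaced by a projection into the embedding space.
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.encoder = backbone
        # LSTM decoder: consumes the image embedding followed by word embeddings
        # and predicts the next token at each step.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc_out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        # images: (B, 3, 224, 224); captions: (B, T) token ids
        img_feat = self.encoder(images).unsqueeze(1)     # (B, 1, embed_dim)
        word_emb = self.embed(captions)                  # (B, T, embed_dim)
        inputs = torch.cat([img_feat, word_emb], dim=1)  # prepend image "token"
        hidden, _ = self.lstm(inputs)
        return self.fc_out(hidden)                       # (B, T+1, vocab_size) logits
```

Training would proceed with teacher forcing and a cross-entropy loss over the predicted token distributions; swapping the backbone (e.g., VGGNet or AlexNet) or the recurrent cell (LSTM vs. plain RNN) corresponds to the model variants compared in the evaluation.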