Accent recognition is a significant area of research whose importance has increased in recent years. Numerous studies have been carried out on various languages to improve the performance of accent recognition systems. However, recognizing a language's regional accents is still a challenging problem. In this study, regional accents of British English were recognized in both gender-independent and gender-dependent experiments using a convolutional neural network. Many different acoustic features have been used in previous studies; since there is still no generally accepted feature set, the selection of handcrafted features remains a challenging task. Moreover, converting audio signals into images in the most appropriate way is critical for a convolutional neural network, a deep learning model commonly used in image applications. To take advantage of the convolutional neural network's ability to characterize two-dimensional signals, spectrogram image features that visualize the frequency distribution of the speech signal were used. For this purpose, the sound signals were first divided into segments before normalization. The fast Fourier transform of each segment was computed, and the segments were combined into a time-frequency matrix. The absolute value was taken, and then a log function was applied to compress the dynamic range of these linear maps, resulting in log-power maps. After a grayscale image was formed by normalizing the obtained time-frequency matrix to the range [0, 1], the dynamic range was quantized and mapped to red, green, and blue color values to generate an RGB image. Thus, the feature extraction process, which is otherwise time-consuming and challenging, was simplified using spectrogram images and a convolutional neural network. In addition, although it is desirable for the training and test data to have a uniform distribution, heterogeneity in the data adversely affects the performance of machine learning algorithms.
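The spectrogram-image pipeline described above (segmentation, FFT, magnitude, log compression, [0, 1] normalization, RGB quantization) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame length, hop size, window, and color lookup table are illustrative assumptions.

```python
import numpy as np

def spectrogram_image(signal, frame_len=512, hop=256, n_colors=64):
    """Turn a 1-D audio signal into an RGB spectrogram image.

    frame_len, hop, and n_colors are assumed values, not the
    paper's exact settings.
    """
    # 1. Segmentation: split the signal into overlapping frames.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])

    # 2. FFT of each windowed frame; keep the absolute value (magnitude).
    window = np.hanning(frame_len)
    spec = np.abs(np.fft.rfft(frames * window, axis=1))

    # 3. Log compression of the dynamic range (log-power map).
    log_spec = np.log(spec + 1e-10)

    # 4. Normalize the time-frequency matrix to [0, 1] (grayscale image).
    gray = (log_spec - log_spec.min()) / (log_spec.max() - log_spec.min())

    # 5. Quantize the grayscale values and map them through a simple
    #    RGB lookup table to form a three-channel image for the CNN.
    levels = np.minimum((gray * n_colors).astype(int), n_colors - 1)
    ramp = np.linspace(0.0, 1.0, n_colors)
    lut = np.stack([ramp,            # R ramps up (illustrative colormap)
                    ramp ** 2,       # G
                    ramp[::-1]],     # B ramps down
                   axis=1)
    return lut[levels]               # shape: (n_frames, freq_bins, 3)
```

In practice the resulting array would be resized to the CNN's expected input resolution (e.g. 227x227 for AlexNet) before training.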
To overcome this problem and improve the model's performance, transfer learning was utilized: a state-of-the-art technique that transfers knowledge from the AlexNet model pre-trained on 1.3 million images from the ImageNet database. Several performance metrics, such as accuracy, specificity, sensitivity, precision, and F-score, were used to evaluate the proposed approach. Accuracies of 92.92% and 93.38% and F-scores of 92.67% and 93.19% were obtained for the gender-independent and gender-dependent experiments, respectively. Additionally, i-vector-based linear discriminant analysis and support vector machine methods were implemented, and the results are presented comparatively to evaluate the performance of the proposed recognition method.
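The evaluation metrics named above can all be derived from a multi-class confusion matrix. The sketch below shows one common way to compute them with macro-averaging over classes; it is a generic illustration and not the averaging scheme or code the authors used.

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Accuracy, specificity, sensitivity, precision, and F-score
    from a confusion matrix (macro-averaged; an assumed convention)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1

    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as class c, actually other
    fn = cm.sum(axis=1) - tp          # class c samples predicted as other
    tn = cm.sum() - tp - fp - fn

    eps = 1e-12                       # guards against classes with no samples
    sensitivity = tp / (tp + fn + eps)               # a.k.a. recall
    specificity = tn / (tn + fp + eps)
    precision = tp / (tp + fp + eps)
    f_score = 2 * precision * sensitivity / (precision + sensitivity + eps)

    return {"accuracy": tp.sum() / cm.sum(),
            "sensitivity": sensitivity.mean(),
            "specificity": specificity.mean(),
            "precision": precision.mean(),
            "f_score": f_score.mean()}
```

Reporting both precision-based and recall-based scores, as the study does, matters when the accent classes are imbalanced, since accuracy alone can hide poor performance on minority accents.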