Medical experts require an efficient tool that provides highly accurate diagnoses for the early and precise detection of brain tumour severity from brain magnetic resonance imaging (MRI). In this paper, we propose a deep learning-based transfer learning technique, built on the You Only Look Once (YOLOv5) automated detection framework, that applies morphological filtering to the test dataset to improve accuracy and performance efficiency. We compare the proposed method against several well-known deep learning-based object detection frameworks and networks, including AlexNet, ResNet-50, GoogLeNet, MobileNet, VGG-16, YOLOv3 (PyTorch), YOLOv4 (Darknet), and YOLOv4-Tiny, and find that the YOLOv5 model performs best among them. All models are trained with a transfer learning methodology on the RSNA-ASNR-MICCAI Brain Tumour Segmentation (BraTS 2021) Challenge dataset. After thorough analysis, we find that YOLOv5 outperforms all other models considered, achieving a mAP@0.5 score of 94.7%, which improves further to 97.2% when the MRI test dataset is morphologically filtered.
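
The abstract refers to morphological filtering of the MRI test images before YOLOv5 detection; the exact operations are detailed later in the paper. The following is a minimal sketch of one plausible pipeline, assuming OpenCV morphological opening and closing with a 5x5 elliptical kernel and a custom-trained YOLOv5 checkpoint loaded through torch.hub; the kernel size, file names, and loading method are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch: morphological filtering of an MRI slice before YOLOv5 inference.
# Assumptions (not from the paper): OpenCV filtering, a 5x5 elliptical structuring
# element, opening followed by closing, and a custom YOLOv5 checkpoint "best.pt"
# loaded via torch.hub.
import cv2
import torch

def morphological_filter(gray, kernel_size=5):
    """Suppress small bright artefacts and fill small dark gaps in a grayscale MRI slice."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)     # remove small bright noise
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # fill small holes in structures
    return closed

# Load a custom-trained YOLOv5 model (hypothetical checkpoint path).
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

gray = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)        # hypothetical test image
filtered = morphological_filter(gray)
rgb = cv2.cvtColor(filtered, cv2.COLOR_GRAY2RGB)                # YOLOv5 hub models expect RGB

results = model(rgb)       # run detection on the filtered slice
print(results.xyxy[0])     # bounding boxes: x1, y1, x2, y2, confidence, class
```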