This research investigates the application of open-source vision architectures, specifically the convolutional ConvNeXt V2 and the transformer-based SegFormer, for brain tumor classification and segmentation in medical imaging. The ConvNeXt V2 model is adapted for classification, while the SegFormer model is tailored for segmentation; both undergo a fine-tuning process involving model initialization, label encoding, hyperparameter adjustment, and training. ConvNeXt V2 classifies the different brain tumor types with an accuracy of 99.60%, consistently outperforming state-of-the-art baselines such as ConvNeXt V1, Swin, and ViT across all metrics for each tumor type; notably, it identifies tumor-free scans with 100% accuracy. SegFormer, in turn, segments brain tumors accurately, achieving a Dice score of up to 90% and a Hausdorff distance of 0.87 mm. These results underscore the potential of open-source modern architectures, exemplified by ConvNeXt V2 and SegFormer, to advance medical imaging practice. This study paves the way for further exploration of such models in medical imaging and their optimization for enhanced performance, pointing toward more capable diagnostic tools.
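The two segmentation metrics reported above, the Dice score and the Hausdorff distance, can be computed from binary masks. The sketch below is not the paper's evaluation code; it is a minimal illustration assuming binary 2D masks and a hypothetical isotropic pixel spacing parameter for converting the Hausdorff distance to millimetres.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

def hausdorff_distance(pred, target, spacing_mm=1.0):
    """Symmetric Hausdorff distance between the foreground point sets
    of two binary masks, scaled by pixel spacing (assumed isotropic)."""
    p = np.argwhere(pred.astype(bool))
    t = np.argwhere(target.astype(bool))
    d = max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
    return d * spacing_mm
```

For example, two identical masks yield a Dice score of 1.0 and a Hausdorff distance of 0, while a 2x2 square mask shifted diagonally by one pixel against the original overlaps in a single pixel, giving Dice 0.25 and Hausdorff distance sqrt(2) pixels.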