Background: The application of machine learning and deep learning techniques in medical imaging is constrained by the limited availability of high-quality medical imaging data, and reluctance to share patient information for research purposes compounds this challenge. While traditional approaches such as data augmentation and geometric transformations have been used to address the issue, Generative Adversarial Networks (GANs) offer a promising alternative for generating realistic synthetic medical images. By learning the underlying data distribution from training examples, GANs can produce believable images from unlabeled original images, mitigating the risk of overfitting on small datasets; the generated images appear highly realistic and, although they differ from the originals, conform to the same data distribution. Method: In this study, we train a Generative Adversarial Network (GAN) on the BraTS 2020 brain tumor segmentation dataset to generate synthetic MRI scans of brain tumors together with their corresponding masks. We then evaluate the effect of dataset augmentation by adding these synthetic images to the training data of a U-Net-based segmentation network. Result: Segmentation performance improved notably with the augmented dataset: testing accuracy increased from 0.90 on the original dataset to 0.94 on the augmented one. Conclusion: Our study underscores the potential of GANs to create visually authentic medical images and to enhance the performance of segmentation networks. This work addresses the critical need for larger and more diverse medical imaging datasets, ultimately advancing medical image analysis and diagnosis.
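
The augmentation pipeline summarized in the Method section can be sketched in code. The following is a minimal illustration only, assuming a PyTorch setup (the abstract does not state the framework): a trained GAN generator `G` is sampled to produce synthetic image/mask pairs, which are concatenated with the real BraTS 2020 training set before fitting a U-Net. `G`, `unet`, the latent dimension, and the training hyperparameters are hypothetical placeholders, not the study's actual implementation.

```python
# Hedged sketch: GAN-based dataset augmentation for U-Net segmentation.
# Assumptions (not from the paper): PyTorch, a generator G(z) that returns
# an (images, masks) pair, and binary tumor-vs-background masks.
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def synthesize(G, n, latent_dim=128, device="cpu"):
    """Sample n latent vectors and decode them into synthetic (image, mask) pairs."""
    G.eval()
    with torch.no_grad():
        z = torch.randn(n, latent_dim, device=device)
        images, masks = G(z)  # assumed generator output: image tensor + mask tensor
    return TensorDataset(images.cpu(), masks.cpu())

def train_unet(unet, real_dataset, G, n_synthetic=1000, epochs=10, lr=1e-4):
    """Train the U-Net on the real dataset augmented with GAN-generated samples."""
    augmented = ConcatDataset([real_dataset, synthesize(G, n_synthetic)])
    loader = DataLoader(augmented, batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(unet.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()  # placeholder segmentation loss
    for _ in range(epochs):
        for images, masks in loader:
            optimizer.zero_grad()
            loss = loss_fn(unet(images), masks)
            loss.backward()
            optimizer.step()
    return unet
```

In this sketch the only change relative to a baseline run is the `ConcatDataset` call, which mirrors the study's comparison between training on the original dataset and on the augmented one.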