When deploying deep neural networks, quantifying a model's uncertainty is necessary to establish confidence in its predictions by distinguishing accurate predictions from coincidentally correct guesses. While it is well known that prediction accuracy depends on the data on which the model was trained, to date, limited work has examined the relationship between training data quantity and uncertainty quantification. In this paper, we propose two metrics to assess the 'quality' of uncertainty quantification and investigate the relationship between training data quantity and Monte Carlo Dropout uncertainty quantification in supervised and semi-supervised learning across various text-based datasets. We found that in supervised learning, uncertainty quantification quality (on both metrics) initially increased with larger quantities of training data but, interestingly, began to decline gradually beyond a certain threshold. In semi-supervised learning, uncertainty quantification was enhanced by both a greater number of training samples and a greater proportion of pre-labelled data. These results suggest that for supervised learning, data scientists generally ought not to invest resources in acquiring more training data solely for superior uncertainty quantification. However, if semi-supervised learning is necessary, then there is a marked benefit to obtaining more data.
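The uncertainty estimates studied here are produced with Monte Carlo Dropout, i.e. by keeping dropout active at inference time and aggregating several stochastic forward passes. The paper does not prescribe a particular implementation; the following is a minimal PyTorch sketch, assuming a classifier containing dropout layers, that returns mean class probabilities and predictive entropy as an uncertainty score. The toy architecture, layer sizes, and entropy-based score are illustrative assumptions, not the metrics proposed in this paper.

```python
import torch
import torch.nn as nn


def mc_dropout_predict(model: nn.Module, inputs: torch.Tensor, n_samples: int = 20):
    """Run several stochastic forward passes with dropout active at inference.

    Returns mean class probabilities and predictive entropy as an uncertainty
    estimate (an illustrative score, not one of the paper's proposed metrics).
    """
    model.eval()
    # Re-enable only the dropout layers, keeping layers such as batch norm in eval mode.
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(inputs), dim=-1) for _ in range(n_samples)]
        )  # shape: (n_samples, batch, n_classes)

    mean_probs = probs.mean(dim=0)
    # Predictive entropy: higher values indicate greater uncertainty.
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=-1)
    return mean_probs, entropy


# Hypothetical usage with a small classifier over fixed-size text feature vectors.
if __name__ == "__main__":
    model = nn.Sequential(
        nn.Linear(300, 128), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(128, 2)
    )
    x = torch.randn(4, 300)  # a batch of 4 feature vectors
    mean_probs, uncertainty = mc_dropout_predict(model, x, n_samples=50)
    print(mean_probs, uncertainty)
```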