Several meta-learning approaches have been developed for the problem of algorithm selection. In this context, it is of central importance to collect a sufficient number of datasets to be used as meta-examples in order to obtain reliable results. Recently, some proposals for generating datasets have addressed this issue with successful results. These proposals include datasetoids, a simple manipulation method for obtaining new datasets from existing ones. However, the increase in the number of datasets raises another issue: to generate meta-examples for training, it is necessary to estimate the performance of the algorithms on the datasets. This typically requires running all candidate algorithms on all datasets, which is computationally very expensive. In a recent paper, active meta-learning was used to address this problem, employing an uncertainty sampling method for the k-NN algorithm with a least confidence score based on a distance measure. Here we extend that work by investigating three questions: 1) is there an advantage in using a frequency-based least confidence score over the distance-based score? 2) given that the meta-learning problem used has three classes, is it better to use a margin-based score? and 3) given that datasetoids are expected to contain some noise, are better results achieved by starting the search with all the datasets already labeled? Some of the results obtained are unexpected and should be analyzed further. However, they confirm that active meta-learning can significantly reduce the computational cost of meta-learning, with potential gains in accuracy.
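
To make the scoring criteria concrete, the following is a minimal sketch, assuming Python with NumPy and scikit-learn, of how a frequency-based least confidence score, a margin-based score, and one possible distance-based score could be computed for a k-NN meta-learner; the function name and the exact score definitions are illustrative assumptions, not the implementation used in the paper.

```python
# A minimal sketch (not the authors' implementation) of the uncertainty scores
# discussed above, for a k-NN meta-learner on a three-class meta-problem.
# The function name, the use of scikit-learn, and the exact score definitions
# are illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors


def knn_uncertainty_scores(X_labeled, y_labeled, X_unlabeled, k=3):
    """Return distance-based LC, frequency-based LC, and margin-based scores
    for each unlabeled meta-example (higher means more uncertain)."""
    X_labeled = np.asarray(X_labeled)
    y_labeled = np.asarray(y_labeled)
    X_unlabeled = np.asarray(X_unlabeled)

    nn = NearestNeighbors(n_neighbors=k).fit(X_labeled)
    dist, idx = nn.kneighbors(X_unlabeled)        # shapes: (n_unlabeled, k)
    neigh_labels = y_labeled[idx]

    # Class "probabilities" estimated from neighbour vote frequencies.
    classes = np.unique(y_labeled)
    freqs = np.stack([(neigh_labels == c).mean(axis=1) for c in classes], axis=1)

    # Frequency-based least confidence: 1 - frequency of the most voted class.
    lc_frequency = 1.0 - freqs.max(axis=1)

    # Margin-based score: the gap between the two most voted classes is small
    # when the example is ambiguous, so it is negated to keep
    # "higher = more uncertain".
    sorted_freqs = np.sort(freqs, axis=1)
    margin = -(sorted_freqs[:, -1] - sorted_freqs[:, -2])

    # Distance-based least confidence (one possible reading): uncertainty grows
    # with the mean distance to the labelled neighbours.
    lc_distance = dist.mean(axis=1)

    return lc_distance, lc_frequency, margin
```

In an active meta-learning loop, the unlabeled dataset or datasetoid with the highest score under the chosen criterion would be selected, its meta-label obtained by running the candidate algorithms on it, and the k-NN meta-learner updated before the next selection.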