Over the last decade, several high-resolution remote sensing benchmark datasets have been developed and publicly released. These datasets, while diverse in design, lack the intra-class variation required for high-performing, machine-assisted visual analytics. More specifically, the disparate datasets are suitable for small, closed-system evaluation; however, they are not well suited for training computer vision models that remain robust in the open, real-world environments encountered in operational remote sensing applications. To that end, a benchmark meta-dataset (MDS) was developed to facilitate the training of models for machine-assisted visual analytics. Four existing benchmark datasets were combined to build the original MDS, which proved effective for training models for both classification and broad-area search applications. In this work, we evaluate an enhanced version of the MDS, MDSv2, built by integrating co-occurring classes from two additional, recently released, publicly available challenge datasets: xView and Functional Map of the World (FMoW). The MDSv2 contains 33 classes with 87,470 total samples. We investigate the utility of three neural architecture search (NAS) deep learning architectures on the MDSv2 for both classification and machine-assisted visual analytics. The NAS models trained on the MDSv2 achieve an average F1 score of 98.01% and a scanning precision of 0.934.