Prediction performance and optimization attributes of the transparent open-box learning network (TOB) applied to a large database of US coals are further explored to complement recently published base-case analysis. Nine sensitivity cases are developed, with the 6339 data records allocated in different ways among the TOB's tuning, training, and testing subsets. These cases demonstrate, for this data set, that the TOB algorithm provides robust, reliable, and repeatable predictions provided that the tuning subset contains approximately 40 data records. On the other hand, increasing the testing subset much beyond 100 records does not improve prediction accuracy. A comparison of the prediction performance of three optimizers applied with the TOB for each of the sensitivity cases reveals that a memetic firefly optimizer matches the optimized solutions found by Excel's GRG Solver optimizer. The functionality of the memetic firefly optimizer enables it to be used effectively in a fully coded version of the TOB optimizer, one that does not rely on Excel cell formulas for the optimizer to operate. This is an advantage when evaluating larger data sets with larger tuning-subset requirements. The memetic firefly optimizer also introduces further transparency, flexibility, and control to the TOB optimization process by facilitating metaheuristic profiling, which aids the tuning and customization of the optimizer for deployment with a range of data sets involving complex, non-linear feasible-solution spaces.
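To illustrate the general structure of a memetic firefly optimizer, the sketch below combines the standard firefly attraction-and-randomization moves with a simple hill-climbing local search applied to the best firefly each generation (the "memetic" step). This is a generic, minimal sketch, not the implementation used in the study: the objective function (`sphere`), the local-search scheme, and all parameter values (`alpha`, `beta0`, `gamma`, population size, iteration count) are illustrative assumptions; in a TOB application the objective would instead be a prediction-error measure over the tuning subset.

```python
import math
import random

def sphere(x):
    # Illustrative test objective; a TOB deployment would minimize
    # prediction error on the tuning subset instead.
    return sum(v * v for v in x)

def memetic_firefly(obj, dim=3, n=20, iters=200, alpha=0.2, beta0=1.0,
                    gamma=1.0, bounds=(-5.0, 5.0), seed=42):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    light = [obj(p) for p in pop]  # lower objective = brighter firefly
    for t in range(iters):
        step = alpha * (0.97 ** t)  # decaying randomization term
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:  # firefly j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
                    pop[i] = [min(hi, max(lo,
                                  a + beta * (b - a) + step * (rng.random() - 0.5)))
                              for a, b in zip(pop[i], pop[j])]
                    light[i] = obj(pop[i])
        # Memetic step: simple hill-climbing local search on the current best.
        b = min(range(n), key=lambda k: light[k])
        for _ in range(10):
            cand = [min(hi, max(lo, v + rng.gauss(0.0, 0.05))) for v in pop[b]]
            f = obj(cand)
            if f < light[b]:
                pop[b], light[b] = cand, f
    b = min(range(n), key=lambda k: light[k])
    return pop[b], light[b]

best_x, best_f = memetic_firefly(sphere)
```

The hybridization of a global metaheuristic with a local refinement step is what distinguishes a memetic variant from the plain firefly algorithm; the decaying randomization term is one common way to shift from exploration toward exploitation over the run.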