In machine learning, developing new algorithms or, more often, making minor modifications to existing ones is a common task. In such cases, a rigorous and correct statistical analysis of the results of the different algorithms is necessary in order to select the most appropriate technique(s) for the problem to be solved. The main difficulty is the lack of a proper compilation of suitable statistical techniques. In this paper, we propose the use of two important non-parametric statistical tests, namely, the Wilcoxon signed-rank test for comparing two classifiers and the Friedman test with its corresponding post-hoc tests for comparing multiple classifiers over multiple datasets. We also introduce a new non-parametric variant of Scheffe's test for locating unequal pairs of mean performance among multiple classifiers when the given datasets are of unequal sizes. The parametric tests previously used for comparing multiple classifiers are also described in brief. As case studies, the proposed non-parametric tests are applied to the classification results on ten real-problem datasets taken from the UCI Machine Learning Database Repository (http://www.ics.uci.edu/mlearn) (Valdovinos and Sanchez, 2009).
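To make the two proposed tests concrete, the following is a minimal sketch of how they can be run in Python using scipy.stats, which provides both tests; the accuracy arrays are illustrative placeholders and are not the paper's UCI results.

```python
from scipy.stats import wilcoxon, friedmanchisquare

# Hypothetical accuracies of two classifiers on the same ten datasets.
acc_a = [0.81, 0.77, 0.90, 0.65, 0.72, 0.88, 0.69, 0.83, 0.75, 0.80]
acc_b = [0.78, 0.76, 0.91, 0.60, 0.70, 0.85, 0.66, 0.80, 0.74, 0.77]

# Wilcoxon signed-rank test: paired comparison of two classifiers.
w_stat, w_p = wilcoxon(acc_a, acc_b)
print(f"Wilcoxon: statistic={w_stat:.3f}, p-value={w_p:.4f}")

# Friedman test: k classifiers (here three) over the same N datasets.
acc_c = [0.79, 0.74, 0.89, 0.63, 0.73, 0.86, 0.70, 0.81, 0.72, 0.79]
f_stat, f_p = friedmanchisquare(acc_a, acc_b, acc_c)
print(f"Friedman: statistic={f_stat:.3f}, p-value={f_p:.4f}")

# If the Friedman null hypothesis (all classifiers perform equally) is
# rejected, post-hoc pairwise comparisons locate the differing pairs.
```

Both tests operate on per-dataset performance scores rather than raw predictions, which is why they require no assumption of normality across datasets.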