Classification is one of the best-known branches of data mining, applied across diverse domains and fields. In the literature, many different classification techniques, such as statistical/intelligent, linear/nonlinear, fuzzy/crisp, shallow/deep, and single/hybrid, have been developed to cover data and systems with different characteristics. Intelligent classification approaches, especially deep learning classifiers, have recently attracted considerable attention due to their ability to provide accurate and efficient results. However, in the learning process of intelligent classifiers, a continuous distance-based cost function is used to estimate the connection weights, even though the goal function in classification problems is discrete; using a continuous cost function in the learning process is therefore unreasonable and inefficient. In this paper, a novel discrete learning-based methodology is proposed to estimate the connection weights of intelligent classifiers more accurately. In the proposed learning process, the weights are adjusted discretely and jump to the target at once, in contrast to conventional continuous learning algorithms, in which the connection weights are adjusted continuously and approach the target step by step. In the present research, the proposed methodology is applied, as an example, to the deep neural network (DNN), one of the most recognized deep classification approaches, with a solid mathematical foundation and strong practical results on complex problems. Although the proposed methodology is implemented only on the DNN, it is a general methodology that can be applied similarly to other shallow and deep intelligent classification models. It can be demonstrated that, owing to its consistency property, the performance of the proposed discrete learning-based DNN (DIDNN) model will not be worse than that of the conventional one.
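The mismatch motivating the paper can be sketched with a toy example (this is only an illustration of the motivation, not the DIDNN update rule itself, which is defined in the paper): a continuous distance-based cost such as the mean squared error can rank two candidate weight settings differently from the discrete 0/1 classification goal, preferring the setting that actually classifies worse.

```python
# Toy illustration (not the paper's DIDNN algorithm): a continuous
# distance-based cost (MSE) can disagree with the discrete 0/1 goal
# by which classifiers are ultimately judged.

def mse(preds, targets):
    # Continuous distance-based cost used by conventional learning.
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def accuracy(preds, targets, threshold=0.5):
    # Discrete goal function: fraction of correctly classified examples.
    return sum((p >= threshold) == bool(t)
               for p, t in zip(preds, targets)) / len(preds)

targets = [0, 1]
model_a = [0.49, 0.51]   # both examples classified correctly
model_b = [0.00, 0.40]   # second example misclassified

print(mse(model_a, targets), accuracy(model_a, targets))  # higher MSE, accuracy 1.0
print(mse(model_b, targets), accuracy(model_b, targets))  # lower MSE, accuracy 0.5
```

Here model B attains the lower continuous cost yet the lower accuracy, so minimizing the continuous cost does not guarantee minimizing the discrete classification error.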
The proposed DIDNN model is evaluated on several well-known cancer classification benchmarks to illustrate its efficiency. The empirical results indicate that the proposed model outperforms the conventional version of the selected deep approach on all data sets. Based on the performance analysis, the DIDNN model improves the performance of the classic version by approximately 3.39%. Therefore, the proposed technique is an appropriate and effective alternative to conventional DNN-based models for classification purposes.