The use of predictive systems has expanded with the development of the related computational methods and the evolution of the sciences in which these methods are applied (Barocas and Selbst, Calif L Rev 104: 671–732, 2016; Pedreschi et al. 2007). These methods include machine learning techniques, face and/or voice recognition, temperature mapping, and others within the artificial intelligence domain. They are being applied to solve problems in socially and politically sensitive areas such as crime prevention and justice management, crowd management, and emotion analysis, just to mention a few. However, the application of these methods can nowadays produce disparate predictions and misclassifications, for example in conviction risk assessment (Office of Probation and Pretrial Services 2011) or in decision-making processes when designing public policies (Lange 2015). The goal of this paper is to identify current gaps in achieving fairness in predictive systems within artificial intelligence by analyzing the academic and scientific literature available up to 2020. To achieve this goal, we gathered the material available in Web of Science and Scopus from the last five years and analyzed the proposed methods and their results in relation to bias as an emergent issue in the field of artificial intelligence. Our tentative conclusions indicate that machine learning has intrinsic limitations that lead to the automation of bias when designing predictive algorithms. Consequently, other methods should be explored, or we should redefine the way current machine learning approaches are used when building decision-making and decision-support systems for crucial institutions of our political systems, such as the judicial system, just to mention one.