In the rapidly evolving field of AI, Explainable Artificial Intelligence (XAI) has become paramount, particularly in Intelligent Environments applications: it brings clarity to complex decision-making processes, fostering trust and enabling rigorous scrutiny. The Shapley value, renowned for its theoretically grounded quantification of feature importance, has emerged as a prevalent standard in both academic research and practical applications. However, the Shapley value requires evaluating all possible coalitions of features, and its exact computation is NP-hard, so in most practical scenarios approximation techniques are employed as a substitute for precise computation. The most widely used of these is SHAP (SHapley Additive exPlanations), which quantifies the influence of a given feature on the decision outcomes of a specific Machine Learning model. The Shapley value's theoretical formulation, by contrast, assesses feature impact on a model evaluation metric (the characteristic function), rather than merely on changes in the model's responses. This paper conducts a comparative analysis on controlled synthetic data with established ground truths, juxtaposing the practical SHAP implementation with the theoretical model in two distinct scenarios: one using the F1-score as the characteristic function and the other using accuracy. These are two representative characteristic functions that capture different aspects of model performance, and whose appropriateness depends on the specific requirements and context of the task to be solved. We analyze where these three alternatives agree and where they diverge in how they reflect feature effects. Ultimately, our research seeks to determine the conditions under which SHAP outcomes align more closely with either the F1-score or the accuracy metric, thereby providing valuable insights for their application in various Intelligent Environment contexts.
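For reference, given a feature set $N$ and a characteristic function $v$ (here, a model evaluation metric such as accuracy or the F1-score evaluated on a feature subset), the Shapley value of feature $i$ is defined as

\[
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr).
\]

The sum ranges over all $2^{|N|-1}$ coalitions excluding $i$, which is the source of the exponential cost noted above.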
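To illustrate the theoretical model against which SHAP is compared, the following minimal Python sketch computes exact Shapley values by brute-force coalition enumeration, using either accuracy or the F1-score as the characteristic function. It assumes scikit-learn and a small synthetic dataset; the names (`value`, `exact_shapley`) and the choice of $v(\emptyset) = 0$ are illustrative assumptions, not part of our implementation.

```python
# Exact Shapley values with a metric-based characteristic function.
# A minimal sketch assuming scikit-learn and a small synthetic dataset.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4,
                           n_informative=3, n_redundant=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
features = list(range(X.shape[1]))

def value(subset, metric):
    """Characteristic function v(S): score of a model trained on subset S."""
    if not subset:
        return 0.0  # simplifying choice for v(empty set)
    cols = list(subset)
    model = LogisticRegression(max_iter=1000).fit(X_tr[:, cols], y_tr)
    return metric(y_te, model.predict(X_te[:, cols]))

def exact_shapley(metric):
    """Enumerate all coalitions; feasible only for a handful of features."""
    n = len(features)
    phi = np.zeros(n)
    for i in features:
        rest = [f for f in features if f != i]
        for k in range(n):
            for S in combinations(rest, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(S + (i,), metric) - value(S, metric))
    return phi

print("accuracy-based:", exact_shapley(accuracy_score))
print("F1-based:      ", exact_shapley(f1_score))
```

Because the enumeration is exponential in the number of features, such exact computation is only feasible at toy scale; SHAP's approximations are what make the approach practical at realistic dimensionalities, at the cost of the discrepancies this paper examines.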