Predictive analytics technologies, including machine learning, artificial intelligence (AI), and generative AI models such as large language models (LLMs), have garnered enthusiasm for their potential to improve healthcare services in smart cities. However, these rapidly developing intelligent agents that guide healthcare decisions may also risk exacerbating health inequities along racial, ethnic, gender, and socioeconomic lines, reflecting systemic discrimination ingrained within healthcare practices. Flawed or injudiciously applied AI systems could improperly restrict opportunities and provide substandard care for minority groups by propagating historical patterns of prejudice encoded within limited training datasets. Left unaddressed, these technologies can hinder sustainable health solutions for smart cities. This study examines intelligent AI models and applications in healthcare settings, with a focus on assessing their impacts on marginalized and disadvantaged populations. Comprehensive scholarly database searches identified 45 relevant studies investigating algorithmic bias, lack of diverse training data, and discrimination risks linked to healthcare AI systems. The review finds that most applications still lack adequate safeguards to prevent discrimination against vulnerable populations. Drawing on this review, we propose an integrated, inclusive smart health model that combines technical interventions with broader participatory and ethical approaches. Realizing AI's fullest potential to meaningfully advance health justice requires not only algorithmic adjustments to mitigate bias, more diverse training data, transparent analysis frameworks, and best practices for ensuring just AI systems in healthcare, but also a human-centered commitment to thoughtful, inclusive development that centers the needs and priorities of the communities affected by health disparities from the outset.