This study investigates the efficacy of the Heuristic Evaluation Method with a mix of expert and non-expert participants in assessing AI suggestion features in web systems. The methodology comprised three stages: an initial Heuristic Evaluation employing the 18 Guidelines for Human-AI Interaction; a Participant Survey to gauge perceptions, consisting of a demographic question, nine Likert statements, and two open-ended questions; and, finally, Analysis and Triangulation to interpret and integrate the findings. Significant differences emerged between expert and non-expert perspectives. Non-experts identified more violations, predominantly of lower severity, whereas experts reported a more balanced severity distribution. Both groups focused on similar areas of violation but in different proportions, indicating a more nuanced understanding of the functionality among experts. Non-experts reported greater personal growth, though they valued their contributions less. The study underscored the importance of the consolidation process in heuristic evaluations, which reduced the total number of identified violations and refined the problem list. This research indicates that heuristic evaluation can be used for early usability assessment of AI features in web systems, supporting the utility of the Guidelines for Human-AI Interaction in scenarios such as the one described in this paper. The approach proved effective, especially where direct user testing is impractical. The diverse participant profiles enriched the evaluation, with non-experts bringing unique insights, albeit facing challenges that indicate a need for enhanced training and support. The study contributes to the field by confirming the value of incorporating a variety of perspectives in heuristic evaluations of AI-enhanced web functionalities.