Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) are transforming research methodologies, including Systematic Literature Reviews (SLRs). Traditional, human-led SLRs are labor-intensive, and AI-driven approaches promise efficiency and scalability; however, the reliability and accuracy of AI-generated literature reviews remain uncertain. This study evaluates the performance of the GPT-4-powered Consensus tool in conducting an SLR on Big Data research, comparing its results with a manually conducted SLR. We assessed Consensus on its ability to detect relevant studies, extract key insights, and synthesize findings. The human-led SLR identified 32 primary studies (PSs) and 207 related works, whereas Consensus detected 22 studies, 16 of which overlapped with the manually identified literature and 5 of which were false positives. The AI-selected studies averaged 202 citations each, far higher than the 64.4 citations per study in the manual SLR, suggesting a bias toward highly cited papers. Moreover, none of the 32 manually selected PSs appeared in the AI-generated results, exposing limitations in recall and selection accuracy. Our key findings show that Consensus accelerates literature retrieval but suffers from hallucinations, reference inaccuracies, and limited critical analysis: it failed to capture nuanced research challenges and missed important application domains. The precision, recall, and F1 score of the AI-selected studies were 76.2%, 38.1%, and 50.6%, respectively, indicating that the AI retrieves relevant papers with high precision but lacks comprehensiveness. To mitigate these limitations, we propose a hybrid AI-human SLR framework in which AI improves search efficiency while human reviewers ensure rigor and validity. AI can support literature reviews, but human oversight remains essential for ensuring accuracy and depth.
Future research should assess AI-assisted SLRs across multiple disciplines to validate generalizability and explore domain-specific LLMs for improved performance.
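For readers who wish to check the reported retrieval metrics, the standard definitions can be sketched in Python. This is a minimal illustration, not part of the study's tooling: the true-positive and false-positive counts (16 overlapping studies, 5 false positives) are taken from the figures above, while the false-negative count behind the 38.1% recall is not stated explicitly in the abstract and is therefore left as a parameter.

```python
# Standard retrieval-metric definitions (illustrative sketch only).
# TP = 16 and FP = 5 come from the counts reported above; the
# false-negative count is not given explicitly and stays a parameter.

def precision(tp: int, fp: int) -> float:
    """Fraction of AI-selected studies that were actually relevant."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of all relevant studies that the AI retrieved."""
    return tp / (tp + fn)

def f1(p: float, r: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

if __name__ == "__main__":
    p = precision(16, 5)            # 16 true overlaps, 5 false positives
    print(f"precision = {p:.1%}")   # matches the reported 76.2%
```

Precision follows directly from the overlap and false-positive counts (16 / 21 ≈ 76.2%); recall and F1 additionally require the size of the full relevant set used as ground truth in the study.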