Debugging is a crucial skill for programmers, yet it can be challenging for novices to learn. The introduction of large language models (LLMs) has opened up new possibilities for providing personalized debugging support to students. However, concerns have been raised about potential student over-reliance on LLM-based tools. This mixed-methods study investigates how a pedagogically designed LLM-based chatbot supports students' debugging efforts in an introductory programming course. We conducted interviews and debugging think-aloud tasks with 20 students at three points during the semester. We focused specifically on characterizing when students initiate requests for help from the chatbot during debugging, how they engage with the chatbot's responses, and how they describe their learning experiences with the chatbot. By analyzing data from the debugging tasks, we identified varying help-seeking behaviors and levels of engagement with the chatbot's responses, depending on students' familiarity with the strategies it suggested. Interviews revealed that students appreciated the content and experiential knowledge provided by the chatbot but did not view it as a primary source for learning debugging strategies. Additionally, students self-identified some of their chatbot usage behaviors as negative, "non-ideal" engagement and others as positive, "learning-oriented" usage. Based on our findings, we discuss pedagogical implications and future directions for designing pedagogical chatbots to support debugging.