This study investigates fine-tuning six variants of OpenAI's Whisper model (small, medium, large, large-v2, large-v3, and turbo) for speech translation from Moroccan Darija to Modern Standard Arabic (MSA). Our primary goal is to evaluate how these variants perform in translating Darija speech into accurate and coherent MSA text, quantifying the trade-off between model capacity and translation quality. Experiments are conducted on the Darija-C Corpus, a specialized dataset designed to capture the linguistic nuances of Darija and its relationship to MSA. We also analyze computational efficiency, memory usage, and training time to assess each variant's suitability for deployment in resource-constrained environments. The results offer practical guidance for building robust Darija-to-MSA speech translation systems and illustrate the broader potential of fine-tuning Whisper for low-resource language pairs.
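For concreteness, the sketch below shows one plausible way to set up such a fine-tuning run with Hugging Face Transformers. It is an illustration under stated assumptions, not the authors' exact pipeline: the checkpoint name, the `audio`/`msa_text` column names, the one-example placeholder standing in for the Darija-C Corpus, and all hyperparameters are assumptions chosen for the example.

```python
# Minimal sketch of fine-tuning a Whisper variant on paired
# (Darija audio, MSA text) data. Dataset columns and hyperparameters
# are illustrative assumptions, not the paper's reported setup.
import numpy as np
import torch
from datasets import Dataset
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
    WhisperProcessor,
)

MODEL_NAME = "openai/whisper-small"  # swap in medium / large / large-v2 / large-v3 / large-v3-turbo

# Arabic decoding so the label sequences carry the MSA language prefix tokens.
processor = WhisperProcessor.from_pretrained(MODEL_NAME, language="Arabic", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(MODEL_NAME)
model.config.forced_decoder_ids = None  # let the fine-tuned labels drive decoding

# One-example placeholder standing in for the Darija-C Corpus (assumption).
train_dataset = Dataset.from_dict({
    "audio": [{"array": np.zeros(16_000, dtype=np.float32), "sampling_rate": 16_000}],
    "msa_text": ["مثال"],
})

def preprocess(example):
    # Log-Mel input features from 16 kHz audio; MSA reference as decoder labels.
    audio = example["audio"]
    example["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    example["labels"] = processor.tokenizer(example["msa_text"]).input_ids
    return example

train_dataset = train_dataset.map(preprocess, remove_columns=["audio", "msa_text"])

class SpeechSeq2SeqCollator:
    """Pads audio features and labels separately, masking label padding with -100."""
    def __init__(self, processor):
        self.processor = processor

    def __call__(self, features):
        inputs = [{"input_features": f["input_features"]} for f in features]
        batch = self.processor.feature_extractor.pad(inputs, return_tensors="pt")
        label_batch = self.processor.tokenizer.pad(
            [{"input_ids": f["labels"]} for f in features], return_tensors="pt"
        )
        labels = label_batch["input_ids"].masked_fill(label_batch["attention_mask"].ne(1), -100)
        # Drop a leading start token if present; the model re-adds it when shifting labels.
        if (labels[:, 0] == model.config.decoder_start_token_id).all():
            labels = labels[:, 1:]
        batch["labels"] = labels
        return batch

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-darija-msa",
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    max_steps=1,  # tiny run for the sketch; a real run needs far more steps
    fp16=torch.cuda.is_available(),  # halves activation memory on supported GPUs
    report_to="none",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=SpeechSeq2SeqCollator(processor),
)
trainer.train()
```

The same script applies to each of the six variants by changing only MODEL_NAME, which is what makes the per-variant comparison of memory usage and training time straightforward to instrument.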