The increasing demand for spatiotemporal data and modeling tasks in the geosciences has made geospatial code generation a critical factor in enhancing productivity. Although large language models (LLMs) have demonstrated potential in code generation tasks, they often refuse to generate code or produce hallucinated output in geospatial settings due to a lack of domain-specific knowledge and code corpora. To address these challenges, this paper presents and open-sources the GeoCode-PT and GeoCode-SFT corpora, along with the GeoCode-Eval evaluation dataset. Additionally, by leveraging QLoRA and LoRA for pretraining and fine-tuning, we introduce GeoCode-GPT-7B, the first LLM focused on geospatial code generation, fine-tuned from Code Llama-7B. Furthermore, we establish a comprehensive geospatial code evaluation framework incorporating option matching, expert validation, and prompt-engineered LLM scoring, and we systematically evaluate GeoCode-GPT-7B on the GeoCode-Eval dataset. Experimental results show that GeoCode-GPT-7B significantly outperforms existing models across multiple tasks. On multiple-choice tasks, its accuracy improves by 9.1% to 32.1%. In code summarization, it achieves superior scores in completeness, accuracy, and readability, with gains ranging from 1.7 to 25.4 points. In code generation, its performance in accuracy, readability, and executability surpasses the benchmarks by 1.2 to 25.1 points. Grounded in the fine-tuning paradigm, this study introduces and validates an approach for enhancing LLMs on geospatial code generation and associated tasks. These findings extend the application boundaries of such models in geospatial domains and offer a robust foundation for exploring their latent potential.
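As a point of reference for the LoRA-based adaptation the abstract mentions, the sketch below illustrates the core low-rank update idea: a frozen pretrained weight matrix W is augmented with a trainable product B·A of two small matrices, so that the effective weight becomes W + (alpha/r)·B·A. All shapes, values, and the scaling here are toy illustrations; the paper's actual ranks, target modules, and hyperparameters are not specified in this abstract.

```python
# Minimal LoRA sketch in pure Python (toy shapes, hypothetical values).
# W is the frozen pretrained weight; A (r x d_in) and B (d_out x r) are
# the small trainable adapter matrices.

def matmul(A, B):
    """Multiply two matrices represented as lists of rows."""
    inner, cols = len(B), len(B[0])
    return [[sum(row[k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for row in A]

def lora_weight(W, A, B, alpha=1.0, r=1):
    """Effective weight W + (alpha / r) * B @ A."""
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

def apply(W, x):
    """Matrix-vector product W @ x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

# Toy example: 2x2 frozen weight (identity), rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 0.0]]          # r x d_in = 1 x 2
B = [[0.0], [1.0]]        # d_out x r = 2 x 1

W_eff = lora_weight(W, A, B, alpha=1.0, r=1)
y = apply(W_eff, [2.0, 3.0])  # -> [2.0, 5.0]
```

During fine-tuning only A and B receive gradients, which is what keeps the memory footprint small enough for a 7B-parameter base model; QLoRA additionally quantizes the frozen W to 4 bits.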