Under low-light conditions, details and edges in images are often difficult to discern. Semantic information relates to the human understanding of an image's content; in low-light image enhancement (LLIE), it helps distinguish different objects, scenes, and edges, and can serve as prior knowledge to guide enhancement methods. However, existing semantic-guided LLIE methods still suffer from shortcomings such as semantic incoherence and insufficient target perception. To address these issues, a semantic-guided low-light image enhancement network (SGRNet) is proposed to strengthen the role of semantic priors in the enhancement process. Following Retinex theory, low-light images are decomposed into illumination and reflectance components with the aid of semantic maps. A semantic perception module, which integrates semantic and structural information into image features, stabilizes image structure and illumination distribution. A heterogeneous affinity module, which incorporates high-resolution intermediate features at different scales into the enhancement network, reduces the loss of image detail during enhancement. Additionally, a self-calibration attention module is designed to decompose the reflectance, leveraging its cross-channel interaction to maintain color consistency. Extensive experiments on seven real-world datasets demonstrate the superiority of the proposed method in preserving illumination distribution, detail, and color consistency in enhanced images.
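
As background for the Retinex model the abstract builds on, the sketch below illustrates the classical decomposition I = R ∘ L (image = reflectance × illumination). It is a minimal illustration only, not the authors' SGRNet: here illumination is approximated by the per-pixel channel maximum, a common hand-crafted heuristic, whereas SGRNet learns the decomposition with semantic guidance.

```python
import numpy as np

def retinex_decompose(image, eps=1e-6):
    """Split an RGB image (H, W, 3) with values in [0, 1] into
    illumination L and reflectance R under the Retinex model I = R * L.

    Illumination is estimated as the per-pixel maximum over channels --
    a simple heuristic stand-in for SGRNet's learned, semantic-guided
    decomposition."""
    illumination = image.max(axis=2, keepdims=True)   # L: (H, W, 1)
    reflectance = image / (illumination + eps)        # R = I / L
    return illumination, reflectance

# Multiplying the two components back together recovers the input.
rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))
L, R = retinex_decompose(img)
recon = R * L
```

Because `L` is the channel-wise maximum, the reflectance `R` stays in [0, 1]; enhancement methods in this family then brighten `L` while keeping `R` (and hence color) consistent.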