This study addresses the Clothes-Changing Person Re-Identification (CC-ReID) problem, which aims to accurately recognize the same pedestrian across changes in attire. Despite recent progress in this field, clothing variations still disrupt pedestrian identity consistency and degrade recognition. To address this, we propose a novel Multidimensional Semantic Disentanglement Network (MSD-Net), which strengthens recognition of non-clothing regions by reducing reliance on clothing features and integrating discriminative and global features. Specifically, we employ pedestrian semantic segmentation maps, combined with RGB images, to disentangle pedestrian features and erase clothing cues, thereby sharpening the model's focus on non-clothing regions. In addition, we introduce a method that converts pedestrian semantic segmentation maps into dual-precision feature maps and applies a spatial attention mechanism to actively learn distinctive pedestrian features, further improving model performance. Extensive experiments on two standard CC-ReID benchmarks validate the effectiveness of our approach, which outperforms existing state-of-the-art methods: in clothes-changing scenarios, our model achieves Top-1 accuracies of 65.3% on PRCC and 84.1% on VC-Clothes.
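To make the two core operations concrete, the sketch below illustrates (i) erasing clothing pixels from an RGB image using a human-parsing map and (ii) a spatial attention block that reweights feature-map locations. This is a minimal PyTorch sketch under stated assumptions, not the paper's actual implementation: the names CLOTHING_LABELS, erase_clothing, and SpatialAttention are hypothetical, the parsing label IDs depend on the human-parsing model used, and the attention block follows a common CBAM-style design that may differ from MSD-Net's.

import torch
import torch.nn as nn

# Hypothetical parsing-label IDs for clothing regions; the real IDs
# depend on the human-parsing network and its label convention.
CLOTHING_LABELS = (5, 6, 7)  # e.g., upper-clothes, dress, pants

def erase_clothing(rgb: torch.Tensor, parsing: torch.Tensor) -> torch.Tensor:
    """Zero out clothing pixels in an RGB image using a parsing map.

    rgb:     (B, 3, H, W) float tensor
    parsing: (B, H, W) integer label map from a human-parsing network
    """
    mask = torch.ones_like(parsing, dtype=rgb.dtype)
    for lbl in CLOTHING_LABELS:
        # Keep only pixels whose label is not a clothing class.
        mask = mask * (parsing != lbl).to(rgb.dtype)
    return rgb * mask.unsqueeze(1)  # broadcast mask over the 3 channels

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: pool over channels, then learn a
    per-location weight map (an illustrative stand-in for the paper's block)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)       # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)      # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                         # reweight spatial locations

In a full pipeline of this kind, the clothing-erased image and the original RGB image would be encoded by a backbone, with the attention-weighted features fused for identification; the fusion strategy here is left unspecified, as it belongs to the paper's method rather than this sketch.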