The widespread deployment of dual-camera systems has laid a solid foundation for practical applications of infrared (IR)-RGB cross-modality person re-identification (ReID). However, the inherent modality gap between RGB and IR images causes significant intra-class variance in the feature space for individuals of the same identity. Existing methods typically employ various network architectures for image style transfer or modality-invariant feature extraction, yet they overlook extracting information from the most fundamental spectral semantic features. Building on these approaches, we propose a multi-spectral semantic alignment (MSSA) architecture that aligns fine-grained spectral semantic features from both intra-modality and inter-modality perspectives. Through modality center semantic alignment (MCSA) learning, we comprehensively mitigate the differences between identity features of different modalities. Moreover, to attenuate discriminative information that is unique to a single modality, we introduce a modality reliability intensification (MRI) loss that enhances the reliability of identity information. Finally, to tackle the challenge that inter-modality intra-class disparities can surpass inter-modality inter-class differences, we leverage a dynamic discriminative center (DDC) loss to further strengthen the discriminability of the reliable information. Extensive experiments on the SYSU-MM01, RegDB, and LLCM datasets demonstrate the substantial advantages of the proposed MSSA over other state-of-the-art methods.