Because optical and Synthetic Aperture Radar (SAR) images provide complementary information, SAR-optical matching is widely used in assisted navigation, disaster monitoring, rescue, and other fields. However, the large geometric and radiometric differences between SAR and optical images pose serious challenges for multimodal image matching. To address this problem, this paper proposes a Refined Subdivision Processing Network (RSPNet) for SAR-optical matching. First, to extract representative image features, RSPNet employs a pseudo-Siamese structure with two branches that share only part of their weights. Second, to preserve detailed image information, the features are subdivided to generate part-level features. Finally, to remove modality-specific but task-irrelevant information, the part-level features are refined with an information bottleneck method. Experiments show that the proposed method achieves excellent performance on scene-level matching between optical and SAR images.
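To make the pipeline concrete, the following is a minimal NumPy sketch of the two ideas named in the abstract: a pseudo-Siamese encoder whose shallow layers are modality-specific while a deeper layer is shared (partial weight sharing), and a subdivision step that splits the resulting feature map into part-level descriptors. All class names, layer sizes, and the number of parts are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class PseudoSiameseSketch:
    """Illustrative pseudo-Siamese encoder with partial weight sharing:
    each modality gets its own shallow layer; the deeper layer is shared.
    Feature maps are then split into stripes for part-level features.
    (Hypothetical sketch; all dimensions are placeholders.)"""

    def __init__(self, in_dim=64, hid_dim=32, out_dim=16, n_parts=4, seed=0):
        rng = np.random.default_rng(seed)
        # Modality-specific (unshared) first-layer weights.
        self.w_opt = rng.standard_normal((in_dim, hid_dim)) * 0.1
        self.w_sar = rng.standard_normal((in_dim, hid_dim)) * 0.1
        # Shared deeper layer: the same weights serve both branches.
        self.w_shared = rng.standard_normal((hid_dim, out_dim)) * 0.1
        self.n_parts = n_parts

    def encode(self, x, modality):
        # Choose the modality-specific shallow layer, then apply the shared one.
        w1 = self.w_opt if modality == "optical" else self.w_sar
        h = relu(x @ w1)                 # modality-specific features
        return relu(h @ self.w_shared)   # shared high-level features

    def part_features(self, feat_map):
        # feat_map: (rows, out_dim). Split the rows into n_parts stripes and
        # average-pool each stripe into one part-level descriptor.
        parts = np.array_split(feat_map, self.n_parts, axis=0)
        return np.stack([p.mean(axis=0) for p in parts])

net = PseudoSiameseSketch()
opt_patch = np.random.default_rng(1).standard_normal((8, 64))
sar_patch = np.random.default_rng(2).standard_normal((8, 64))
opt_parts = net.part_features(net.encode(opt_patch, "optical"))
sar_parts = net.part_features(net.encode(sar_patch, "sar"))
print(opt_parts.shape, sar_parts.shape)  # part-level descriptors: (4, 16) each
```

Part-level descriptors from the two branches could then be compared (e.g., by cosine similarity per part) to score a candidate SAR-optical pair; the information bottleneck refinement the abstract mentions would act on these descriptors and is omitted here.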