Current mainstream medical image segmentation models have limitations in local feature representation, multi-scale feature integration, and channel feature selection. To address these issues, this paper proposes a segmentation model based on the Multi-Scale Hybrid Vision Network (MSHV-Net), which aims to improve both the accuracy and the efficiency of skin lesion segmentation. The proposed approach comprises three key modules: the Rapid Feature Module (RFM), which integrates partial convolution (PConv) and pointwise convolution (PWConv) to improve fine-grained local feature extraction; the Multi-Scale Wavelet Module (MSWM), which uses wavelet transforms to decompose features at multiple scales, expanding the receptive field and capturing global contextual information; and the Adaptive Channel Focus (ACF) module, which employs an adaptive attention mechanism for channel feature selection, amplifying essential features while suppressing redundant ones. Experimental results show that MSHV-Net achieves a mean Intersection over Union (mIoU) of 79.15% and a Dice coefficient of 88.36% on the ISIC 2017 dataset, with stronger performance on the ISIC 2018 dataset (80.43% mIoU, 89.16% Dice), along with notable gains in specificity and sensitivity. Ablation studies validate the independent contribution of each module. On the Waterloo Skin Cancer Segmentation Dataset, MSHV-Net reaches 93.77% sensitivity and 99.49% specificity, and it achieves 78.61% and 88.95% mIoU on the MoNuSeg and GlaS datasets, respectively, demonstrating strong generalization. MSHV-Net segments complex skin lesions accurately while remaining lightweight: it operates with 0.07M parameters, 0.069 GFLOPs, 16.7 ms inference time, and 1.31 MB memory usage. The implementation is available at: https://github.com/1502GaoYi/MSHV-Net.
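To illustrate the kind of operation a wavelet-based module such as MSWM builds on, the sketch below implements a single-level 2D Haar wavelet decomposition in plain Python. The decomposition splits a feature map into a low-frequency sub-band (LL) and three high-frequency detail sub-bands (LH, HL, HH), each at half the spatial resolution, so any convolution applied afterwards effectively covers twice the original receptive field. This is a minimal, illustrative sketch, not the MSWM implementation; the function and variable names are assumptions, not taken from MSHV-Net.

```python
def haar2d(x):
    """Single-level 2D Haar transform of a 2D list with even height/width.

    Returns four sub-bands (LL, LH, HL, HH), each of shape (H/2, W/2),
    using the orthonormal Haar scaling (each value divided by 2).
    """
    h, w = len(x), len(x[0])
    assert h % 2 == 0 and w % 2 == 0, "spatial dims must be even"
    ll, lh, hl, hh = [], [], [], []
    for i in range(0, h, 2):
        ll_row, lh_row, hl_row, hh_row = [], [], [], []
        for j in range(0, w, 2):
            a, b = x[i][j], x[i][j + 1]          # top-left, top-right
            c, d = x[i + 1][j], x[i + 1][j + 1]  # bottom-left, bottom-right
            ll_row.append((a + b + c + d) / 2)   # low-pass in both axes
            lh_row.append((a - b + c - d) / 2)   # horizontal detail
            hl_row.append((a + b - c - d) / 2)   # vertical detail
            hh_row.append((a - b - c + d) / 2)   # diagonal detail
        ll.append(ll_row); lh.append(lh_row); hl.append(hl_row); hh.append(hh_row)
    return ll, lh, hl, hh

# A constant 4x4 map puts all its energy in the LL band; the detail bands are zero.
ll, lh, hl, hh = haar2d([[1.0] * 4 for _ in range(4)])
```

Stacking several such levels yields the multi-scale decomposition that lets a module aggregate progressively more global context without large convolution kernels.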