Segmentation scenes frequently contain many strip-shaped objects, for which conventional square convolution may yield redundant information. Building on our previously proposed SA-FFNet (Zhou et al. in Neurocomputing 453:50–59, 2021), we study the effect of strip sub-region information extraction on semantic segmentation and propose a new network. Our method is well suited to extracting the multi-scale strip objects that often appear in segmentation scenes, and it uses strip dilated convolution to further capture contextual dependencies in the complementary direction. First, we propose a multi-scale strip pooling module that enables the backbone network to effectively obtain multi-scale contexts; then, we introduce a strip dilated convolution module, which supplements strip pooling with vertical contexts extracted by strip dilated convolution; finally, we construct a novel network integrating the two proposed modules. The method explicitly takes the horizontal and vertical contexts of multi-scale strip objects into consideration, so that scene understanding benefits from long-range dependencies. Experimental results on the widely used PASCAL VOC 2012 and Cityscapes scene-analysis benchmarks show that our method outperforms existing networks such as OCRNet, DeepLabV3+, and SPNet, both qualitatively and quantitatively.
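To make the strip-pooling idea concrete, the following is a minimal NumPy sketch, not the paper's implementation: a feature map is average-pooled down to a strip along one spatial axis and broadcast back, and a toy multi-scale variant pools over several horizontal bands. The function names and the band-splitting scheme are illustrative assumptions.

```python
import numpy as np

def strip_pool(x, axis):
    """Average-pool a 2-D feature map down to a strip along `axis`,
    then broadcast the strip back to the original shape.
    axis=1 pools over width  -> an H x 1 horizontal strip;
    axis=0 pools over height -> a 1 x W vertical strip."""
    return np.broadcast_to(x.mean(axis=axis, keepdims=True), x.shape)

def multi_scale_strip_pool(x, scales=(1, 2)):
    """Toy multi-scale variant (illustrative only): split the map into
    `s` horizontal bands per scale, strip-pool each band over the width,
    and average the per-scale results."""
    out = np.zeros(x.shape, dtype=float)
    for s in scales:
        bands = np.array_split(x, s, axis=0)
        pooled = np.concatenate([strip_pool(b, axis=1) for b in bands], axis=0)
        out += pooled
    return out / len(scales)
```

Pooling along one axis at a time is what gives the module its long, narrow receptive field, in contrast to square pooling windows.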