To preserve the boundaries of salient objects in detection results, some methods have begun to use additional edge labels to train their networks to learn fine-grained detail. These methods have achieved encouraging progress. However, because the quality of the predicted saliency maps depends on the boundary features, how to extract effective boundary features and fuse them with semantic features remains worth exploring. In this paper, we propose a novel Dual-branch Mutual Assistance Network (DMANet) that simultaneously detects salient objects and salient boundaries. To combine the respective strengths of the two tasks, we merge the features of the two branches to generate complementary features, which are then used to refine both the semantic and the boundary information. Through the interaction of the two branches, the semantic features are progressively improved by the boundary features, so that the predicted salient regions have sharp boundaries. In addition, we design a novel Feature Multi-pathway Compression and Reconstruction (FMCR) module and embed multiple instances of it in the network. Compression seeks a concise representation of the original features; reconstruction discriminates the key information in the compressed features and analyzes it further. By combining the analysis results of multiple pathways, the FMCR module enhances the network's ability to identify salient objects from the various saliency cues it obtains. Experimental results on five datasets show that our method surpasses 15 state-of-the-art methods with significantly improved performance.
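The abstract describes the two mechanisms only at a high level, so the sketch below is a hypothetical PyTorch illustration rather than the paper's actual architecture: the `MutualAssistanceBlock` and `FMCRSketch` names, channel counts, and layer choices are assumptions made for clarity. It shows one plausible way the two branches could be merged into complementary features that refine each other, and one plausible form of a multi-pathway compression-and-reconstruction block.

```python
# Minimal sketch of the ideas described above, assuming 4-D feature maps
# (batch, channels, height, width). Not the paper's exact design.
import torch
import torch.nn as nn


class MutualAssistanceBlock(nn.Module):
    """Fuses semantic and boundary features into complementary features,
    then uses them to refine each branch (illustrative only)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Merge the two branches into a shared complementary representation.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Branch-specific refinement convolutions.
        self.refine_sem = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.refine_bnd = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, sem: torch.Tensor, bnd: torch.Tensor):
        comp = self.fuse(torch.cat([sem, bnd], dim=1))               # complementary features
        sem = sem + self.refine_sem(torch.cat([sem, comp], dim=1))   # refine semantics with boundary cues
        bnd = bnd + self.refine_bnd(torch.cat([bnd, comp], dim=1))   # refine boundaries with semantic cues
        return sem, bnd


class FMCRSketch(nn.Module):
    """Toy multi-pathway compression-and-reconstruction block: each pathway
    compresses the input to fewer channels, reconstructs it back, and the
    pathway outputs are combined (illustrative only)."""

    def __init__(self, channels: int = 64, num_paths: int = 3):
        super().__init__()
        self.paths = nn.ModuleList()
        for i in range(num_paths):
            compressed = max(channels // (2 ** (i + 1)), 8)
            self.paths.append(nn.Sequential(
                nn.Conv2d(channels, compressed, kernel_size=1),               # compression
                nn.ReLU(inplace=True),
                nn.Conv2d(compressed, channels, kernel_size=3, padding=1),    # reconstruction
            ))
        self.merge = nn.Conv2d(num_paths * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor):
        # Combine the analysis results of all pathways with a residual connection.
        return x + self.merge(torch.cat([p(x) for p in self.paths], dim=1))


if __name__ == "__main__":
    # Example: refine 64-channel semantic/boundary feature maps at one stage.
    sem = torch.randn(1, 64, 56, 56)
    bnd = torch.randn(1, 64, 56, 56)
    sem, bnd = MutualAssistanceBlock(64)(sem, bnd)
    sem = FMCRSketch(64)(sem)
    print(sem.shape, bnd.shape)  # torch.Size([1, 64, 56, 56]) twice
```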