Breast UltraSound (BUS) image segmentation is crucial for the diagnosis and analysis of breast cancer. However, most existing BUS segmentation methods tend to overlook vital edge information. Meanwhile, noise, similar intensity distributions, and variations in tumor shape and size lead to severe missed and false detections. To address these issues, we propose a Reverse Region-Aware Network with Edge Difference, called RRANet, which learns edge information from low-level features and region information from high-level features. Specifically, we first design an Edge Difference Convolution (EDC) to fully mine edge information. EDC aggregates intensity and gradient information to extract edge details from low-level features in both the horizontal and vertical directions. Next, we propose a Multi-Scale Adaptive Module (MSAM) that effectively extracts global information from high-level features. MSAM encodes features along the spatial dimension, which expands the receptive field and captures richer local context. In addition, we develop a Reverse Region-Aware Module (RRAM) to progressively refine the global information; this module establishes the relationship between region and edge cues while correcting erroneous predictions. Finally, the edge information and global information are fused to improve prediction accuracy on BUS images. Extensive experiments on three challenging public BUS datasets show that our model outperforms several state-of-the-art medical image segmentation methods.
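
As a concrete illustration of the edge-mining idea, the following is a minimal PyTorch sketch of an EDC-style block. The abstract does not give the exact formulation, so this sketch assumes the intensity term comes from a vanilla 3x3 convolution and the horizontal/vertical gradient terms from fixed Sobel-like difference kernels; the class name `EdgeDifferenceConv`, the weighting factor `theta`, and the 1x1 fusion layer are illustrative assumptions, not the paper's definition of EDC.

```python
# Hypothetical EDC-style block: intensity branch (learned 3x3 conv) plus a
# horizontal/vertical gradient branch (fixed Sobel-like kernels), summed with
# a weighting factor. All names and the exact combination rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeDifferenceConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, theta: float = 0.7):
        super().__init__()
        self.theta = theta  # balance between intensity and gradient terms (assumed)
        self.in_ch = in_ch
        # Intensity branch: ordinary learned convolution.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        # Gradient branch: fixed horizontal / vertical difference (Sobel) kernels.
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        kernel = torch.stack([sobel_x, sobel_y]).unsqueeze(1)  # (2, 1, 3, 3)
        self.register_buffer("grad_kernel", kernel)
        # 1x1 conv to project the stacked gradient responses back to out_ch.
        self.fuse = nn.Conv2d(2 * in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        intensity = self.conv(x)
        # Depthwise horizontal/vertical differences for every input channel.
        k = self.grad_kernel.repeat(self.in_ch, 1, 1, 1)          # (2*in_ch, 1, 3, 3)
        grads = F.conv2d(x, k, padding=1, groups=self.in_ch)      # (N, 2*in_ch, H, W)
        gradient = self.fuse(grads)
        return intensity + self.theta * gradient


# Usage on a low-level feature map (e.g. 64 channels).
feat = torch.randn(2, 64, 96, 96)
edc = EdgeDifferenceConv(64, 64)
print(edc(feat).shape)  # torch.Size([2, 64, 96, 96])
```

The two-branch form mirrors the stated goal of aggregating intensity and gradient cues in both directions; the real EDC may instead rearrange the difference operation inside a single learned kernel.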