High-resolution remote sensing images now provide rich data sources, and deep learning models offer powerful feature representation for remote sensing object detection. However, owing to complex object structures and variable rotation angles, efficiently estimating an oriented bounding box that accurately locates an object remains an open issue. To address this, a new one-stage structure-adaptive oriented object detection (SOOD) network is proposed in this article. First, we design a new rotation angle encoder (RAE), which adopts an angle coordinate system and performs periodic angle correction. Unlike the traditional long-edge definition for angle estimation, the RAE mitigates the boundary discontinuity and square-like problems. Then, structure-adaptive label assignment (SALA) and structure-adaptive confidence estimation (SACE) are introduced to locate objects more accurately. On the one hand, labels are assigned to anchor boxes according to whether their center points fall inside the object's inscribed ellipse; by constraining the ellipse boundary and employing nonparametric label assignment, high-quality anchor boxes are selected and low-quality ones are suppressed. On the other hand, intersection over union (IoU) prediction and uncertainty prediction are integrated into a quality evaluation function that dynamically evaluates the localization and classification ability of each predicted box. Extensive experiments on the publicly available DOTA1.0, DOTA1.5, DIOR, and MAR20 datasets demonstrate the effectiveness of the proposed model. The source code will be available at https://github.com/fan609/SOOD.
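To make the SALA criterion above concrete, the following Python sketch tests whether an anchor's center point lies inside the ellipse inscribed in a rotated ground-truth box. This is a minimal illustration under assumed conventions, not the paper's implementation: the `(cx, cy, w, h, theta)` box encoding, the function names, and the exponential weighting in `quality_score` are all illustrative assumptions.

```python
import numpy as np

def center_in_inscribed_ellipse(anchor_cx, anchor_cy, box):
    """Return True if the anchor center lies inside the ellipse
    inscribed in a rotated box (cx, cy, w, h, theta in radians).

    Assumed box convention; the paper may encode boxes differently.
    """
    cx, cy, w, h, theta = box
    # Translate the anchor center into the box's local frame.
    dx, dy = anchor_cx - cx, anchor_cy - cy
    # Rotate by -theta so the box axes align with the coordinate axes.
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    local_x = cos_t * dx + sin_t * dy
    local_y = -sin_t * dx + cos_t * dy
    # Standard ellipse inclusion test with semi-axes w/2 and h/2.
    return (local_x / (w / 2)) ** 2 + (local_y / (h / 2)) ** 2 <= 1.0

def quality_score(iou_pred, uncertainty):
    """Hypothetical SACE-style combination of IoU and uncertainty
    predictions: confident, well-localized boxes score highest."""
    return iou_pred * np.exp(-uncertainty)

# Example: an axis-aligned box centered at the origin, 10 wide, 4 tall.
print(center_in_inscribed_ellipse(3.0, 0.0, (0.0, 0.0, 10.0, 4.0, 0.0)))  # True
print(center_in_inscribed_ellipse(5.0, 1.9, (0.0, 0.0, 10.0, 4.0, 0.0)))  # False
```

Assigning positives only where the center falls inside the inscribed ellipse, rather than anywhere inside the rotated rectangle, excludes the corner regions of elongated objects, which is one plausible reading of how low-quality anchor boxes are suppressed.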