Synthetic Aperture Radar (SAR) object detection plays a key role in ensuring maritime traffic safety. However, SAR images contain substantial speckle noise, which poses a challenge to conventional deep learning methods for feature extraction and processing. Therefore, we propose a YOLO-based feature disentanglement and interaction network for SAR object detection (FDI-YOLO). First, FDI-YOLO adopts a reversible cross-stage partial network (RCSPNet) as its backbone. RCSPNet uses reversible transformations to retain complete feature information during feature extraction and decomposes it into feature maps of different dimensions. Second, we propose a cross-scale depth feature interaction (CDFI) structure, which captures the local texture and global semantic information of intra-scale features through crossover frequency semantic perception (CFSP) and then strengthens the coupling of cross-scale features through bidirectional information interaction. Finally, we employ an adaptive object detection head and a bounding-box regression loss with a dynamic focusing mechanism to further improve the detection capability of FDI-YOLO on SAR images. We conducted experiments on three publicly available SAR datasets: SSDD, ISDD, and HRSID, achieving F1 scores of 98.1%/88.6%/88.5%, AP50 of 98.7%/90.3%/90.9%, and AP50-95 of 71.0%/42.4%/64.3%, respectively. The experimental results show that FDI-YOLO performs SAR object detection well while requiring fewer computational resources.
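To make the role of the reversible transformations concrete, the sketch below shows a minimal RevNet-style additive-coupling block in PyTorch: the input can be reconstructed exactly from the output, so the block discards no feature information. The module name, the choice of F/G sub-networks, and the coupling scheme are illustrative assumptions, not the actual RCSPNet design.

```python
# Minimal sketch of a reversible (additive-coupling) block, assuming a
# RevNet-style design; NOT the paper's actual RCSPNet implementation.
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        # F and G are arbitrary residual sub-functions; simple conv stacks here.
        self.f = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.SiLU())
        self.g = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.SiLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split channels into two halves and apply additive coupling.
        x1, x2 = x.chunk(2, dim=1)
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        # The inverse recovers the input exactly, so the block is lossless.
        y1, y2 = y.chunk(2, dim=1)
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return torch.cat([x1, x2], dim=1)

if __name__ == "__main__":
    block = ReversibleBlock(64)
    x = torch.randn(1, 64, 32, 32)
    y = block(x)
    assert torch.allclose(block.inverse(y), x, atol=1e-5)  # lossless round trip
```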
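Likewise, the bidirectional cross-scale interaction can be sketched as a top-down pass followed by a bottom-up pass over three pyramid levels, in the spirit of the CDFI structure; the fusion scheme below is an assumption for illustration, and the actual CDFI and CFSP modules are more elaborate.

```python
# Minimal sketch of bidirectional cross-scale feature interaction, assuming
# a simple top-down + bottom-up fusion; NOT the paper's actual CDFI module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 3x3 convs refine features after each fusion step.
        self.td = nn.ModuleList([nn.Conv2d(channels, channels, 3, padding=1) for _ in range(2)])
        self.bu = nn.ModuleList([nn.Conv2d(channels, channels, 3, padding=1) for _ in range(2)])

    def forward(self, p3, p4, p5):
        # Top-down path: propagate global semantics to finer scales.
        p4 = self.td[0](p4 + F.interpolate(p5, size=p4.shape[-2:], mode="nearest"))
        p3 = self.td[1](p3 + F.interpolate(p4, size=p3.shape[-2:], mode="nearest"))
        # Bottom-up path: propagate local texture back to coarser scales.
        p4 = self.bu[0](p4 + F.max_pool2d(p3, kernel_size=2))
        p5 = self.bu[1](p5 + F.max_pool2d(p4, kernel_size=2))
        return p3, p4, p5

if __name__ == "__main__":
    fuse = BidirectionalFusion(64)
    p3, p4, p5 = (torch.randn(1, 64, s, s) for s in (80, 40, 20))
    out3, out4, out5 = fuse(p3, p4, p5)  # shapes are preserved per level
```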