In disaster scenarios, automatic, accurate, and fast detection of victims in images captured by unmanned aerial vehicles (UAVs) can significantly reduce the time required for manual search and rescue. Existing victim detection methods, however, are not sufficiently robust at identifying partially occluded, multiscale objects against diverse backgrounds. To overcome this problem, we propose YOLO-MSFR, a hybrid-domain attention algorithm based on YOLOv5 with multiscale feature reuse. First, because victim targets are easily masked by complex backgrounds, which complicates the representation of target attributes, a channel-spatial attention module was built to improve the expression of target features. Second, to address the frequently missed multiscale characteristics of victim targets, a multiscale feature reuse (MSFR) module was designed to ensure that large-scale target features are effectively expressed while the expression of small-target features is enhanced. The MSFR module is built on dilated convolution to counteract the loss of small-target feature information during downsampling in the backbone network, and features are reused through cascaded residual connections, which reduces the number of trainable parameters and prevents vanishing gradients as the network deepens. Finally, the efficient intersection over union (EIOU) loss function was adopted to accelerate network convergence and improve the network's ability to locate victims accurately. The proposed algorithm was compared with five classical object detection algorithms on data from multiple disaster environments to verify its advantages. The experimental results show that the proposed algorithm accurately detects multiscale victim targets in complex natural disaster scenes, reaching an mAP of 91.0%. The detection speed at 640 × 640 resolution was 42 fps, indicating good real-time performance.
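The abstract does not spell out the attention design; as a minimal illustration only, a CBAM-style channel-plus-spatial attention block in PyTorch could take the following shape. The module name, reduction ratio, and 7×7 spatial kernel are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class HybridDomainAttention(nn.Module):
    """Illustrative channel + spatial attention (CBAM-style sketch).

    The exact YOLO-MSFR design is not given in the abstract; the
    reduction ratio and 7x7 spatial kernel below are assumptions.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight channels.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over pooled channel statistics.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel domain: shared MLP over average- and max-pooled features.
        avg = self.channel_mlp(x.mean(dim=(2, 3)))   # (B, C)
        mx = self.channel_mlp(x.amax(dim=(2, 3)))    # (B, C)
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial domain: per-pixel weights from channel-wise avg/max maps.
        sa_in = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)  # (B, 2, H, W)
        return x * torch.sigmoid(self.spatial_conv(sa_in))
```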
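Likewise, a sketch of a dilated-convolution block with cascaded residual feature reuse, in the spirit of the described MSFR module, might look as follows; the dilation rates and the 1×1 fusion step are assumptions. Dilated 3×3 convolutions enlarge the receptive field (effective kernel size k + (k−1)(d−1)) without downsampling, which is how such a block can preserve small-target detail.

```python
import torch
import torch.nn as nn

class MSFRBlock(nn.Module):
    """Illustrative multiscale feature reuse block (sketch only).

    Dilated 3x3 convolutions keep spatial resolution while widening the
    receptive field; cascaded residuals let each branch reuse the
    previous branch's output, so deeper branches add few parameters and
    gradients have short skip paths. Dilation rates (1, 2, 4) are
    assumptions, not values from the paper.
    """
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.SiLU(inplace=True),
            )
            for d in dilations
        )
        # 1x1 conv fuses the identity plus all cascaded branch outputs.
        self.fuse = nn.Conv2d(channels * (len(dilations) + 1), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats, y = [x], x
        for branch in self.branches:
            y = branch(y) + y      # cascaded residual: reuse prior features
            feats.append(y)
        return self.fuse(torch.cat(feats, dim=1)) + x
```

For example, `MSFRBlock(256)` applied to a 256-channel backbone feature map returns a tensor of the same shape, so the block can be dropped between existing stages.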
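For reference, the EIOU loss is commonly defined by replacing CIoU's coupled aspect-ratio term with separate width and height penalties, which is what speeds up convergence and tightens localization:

\[
\mathcal{L}_{\mathrm{EIOU}} = 1 - \mathrm{IoU} + \frac{\rho^2\!\left(\mathbf{b}, \mathbf{b}^{gt}\right)}{c^2} + \frac{\rho^2\!\left(w, w^{gt}\right)}{C_w^2} + \frac{\rho^2\!\left(h, h^{gt}\right)}{C_h^2}
\]

where \(\mathbf{b}\) and \(\mathbf{b}^{gt}\) are the centers of the predicted and ground-truth boxes, \(w, h\) and \(w^{gt}, h^{gt}\) their widths and heights, \(\rho(\cdot)\) the Euclidean distance, \(c\) the diagonal length of the smallest box enclosing both, and \(C_w\), \(C_h\) that enclosing box's width and height.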