Over the years, many solutions have been proposed to improve object detection in maritime environments. However, none of these approaches uses flight information, such as altitude, camera angle, time of day, and atmospheric conditions, to improve detection accuracy and network robustness, even though this information is often available and captured by the UAV. This work aims to develop a network that is invariant to image-capture conditions, such as altitude and camera angle. To achieve this, metadata was integrated into the neural network and an adversarial learning training approach was employed, built on top of YOLOv7, a state-of-the-art real-time object detector. Comprehensive experiments and analyses were conducted to evaluate the effectiveness of this methodology. The findings reveal that the improvements achieved by this approach are minimal when the goal is to create networks that generalize better across these specific domains. YOLOv7's mosaic augmentation was identified as one potential cause of this minimal impact, because it also encourages the model to become invariant to these image-capture conditions. Another potential cause is that the domains considered (altitude and angle) are not orthogonal with respect to their impact on the captured images. Further experiments should be conducted with datasets that offer more diverse metadata, such as adverse weather and sea conditions, which may be more representative of real maritime surveillance conditions. The source code of this work is publicly available at https://github.com/ipleiria-robotics/maritime-metadata-adaptation.
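
For readers interested in the mechanics of the adversarial component, the sketch below illustrates one common way of pushing a detector backbone toward invariance to capture-condition domains: a metadata-supervised domain classifier attached through a gradient reversal layer (as in DANN). The class names, layer sizes, and the altitude/angle bin labels are illustrative assumptions rather than the exact implementation used in this work; the linked repository contains the actual code.

```python
# Minimal sketch (assumptions: PyTorch, a generic detector backbone, and a
# gradient reversal layer; layer sizes and the number of metadata bins are
# illustrative, not the exact architecture used in this work).
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class DomainHead(nn.Module):
    """Predicts an image-capture domain (e.g. an altitude or angle bin) from backbone features."""

    def __init__(self, in_channels: int, num_domains: int):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_channels, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_domains),
        )

    def forward(self, features: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
        # Gradients flowing back into the backbone are reversed, so the backbone
        # is pushed toward features that are uninformative about the capture domain.
        reversed_features = GradientReversal.apply(features, lambd)
        return self.classifier(reversed_features)


# Hypothetical training step: combine the usual detection loss with the
# adversarial domain loss computed from metadata-derived labels.
# det_loss = detector_loss(predictions, targets)
# domain_logits = domain_head(backbone_features, lambd=0.1)
# adv_loss = nn.functional.cross_entropy(domain_logits, altitude_bin_labels)
# (det_loss + adv_loss).backward()
```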