Self-Supervised Pre-Training Joint Framework: Assisting Lightweight Detection Network for Underwater Object Detection

Cited by: 11
Authors
Wang, Zhuo [1 ]
Chen, Haojie [1 ]
Qin, Hongde [1 ]
Chen, Qin [1 ]
Affiliations
[1] Harbin Engn Univ, Coll Shipbuilding Engn, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
underwater object detection; self-supervised learning; lightweight detection network; deep learning; IMAGE-ENHANCEMENT;
DOI
10.3390/jmse11030604
Chinese Library Classification
U6 [Water transportation]; P75 [Ocean engineering]
Discipline codes
0814; 081505; 0824; 082401
Abstract
Underwater object detection has long been a challenging task in computer vision. Because light is attenuated by the water column and scattered by suspended particles, underwater optical images often suffer from color distortion and blurred target features, which greatly reduce detection accuracy. Although deep learning-based algorithms have achieved state-of-the-art results in object detection, most cannot be deployed in practice because of the limited computing capacity of the low-power processors embedded in unmanned underwater vehicles. This paper proposes a lightweight underwater object detection network based on the YOLOX model, called LUO-YOLOX. A novel weighted ghost-CSPDarknet and a simplified PANet are used in LUO-YOLOX to reduce the parameter count of the whole model. Moreover, to address the color distortion and unclear target features in underwater images, this paper proposes an efficient self-supervised pre-training joint framework based on underwater auto-encoder transformation (UAET). After end-to-end pre-training with this framework, the backbone of the detection network can extract more essential and robust features from degraded images when it is subsequently retrained on underwater datasets. Extensive experiments on the URPC2021 and Detecting Underwater Objects (DUO) datasets verify the performance of the proposed method. This work can assist unmanned underwater vehicles in performing underwater object detection tasks more accurately.
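The core pre-training idea can be sketched as follows: synthetically degrade a clean image with an underwater-style transform (wavelength-dependent attenuation plus haze from scattering), then train an encoder-decoder to reconstruct the clean image, so the shared backbone learns degradation-robust features. The channel gains, haze level, and toy "decoder" below are illustrative assumptions for the sketch, not the UAET formulation from the paper.

```python
import random

# Illustrative per-channel gains (R, G, B): red light attenuates
# fastest underwater. These values are assumptions, not from the paper.
ATTENUATION = (0.5, 0.8, 0.9)

def underwater_degrade(pixel, rng):
    """Degrade one RGB pixel: channel-wise attenuation plus a small
    additive haze term modeling scattering by suspended particles."""
    return tuple(min(1.0, c * a + 0.1 * rng.random())
                 for c, a in zip(pixel, ATTENUATION))

def mse(img_a, img_b):
    """Mean squared error over two pixel lists -- the reconstruction
    loss that supplies the self-supervised training signal."""
    n = 3 * len(img_a)
    return sum((ca - cb) ** 2
               for pa, pb in zip(img_a, img_b)
               for ca, cb in zip(pa, pb)) / n

rng = random.Random(0)
clean = [(rng.random(), rng.random(), rng.random()) for _ in range(1024)]
degraded = [underwater_degrade(p, rng) for p in clean]

# A toy "decoder" that merely undoes the color cast already reconstructs
# the clean image better than leaving the degraded image untouched --
# minimizing this gap is what pushes the backbone toward robust features.
restored = [tuple(min(1.0, c / a) for c, a in zip(p, ATTENUATION))
            for p in degraded]
print(mse(clean, restored) < mse(clean, degraded))  # True
```

In the actual framework the hand-written inverse transform is replaced by a learned decoder, and after pre-training the decoder is discarded while the encoder initializes the detection backbone.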
Pages: 18