FEW-SHOT SAR SHIP IMAGE DETECTION USING TWO-STAGE CROSS-DOMAIN TRANSFER LEARNING

Cited: 2
Authors
Wang, Xu [1 ]
Zhou, Huaji [1 ,2 ]
Chen, Zheng [3 ]
Bai, Jing [1 ]
Ren, Junjie [1 ]
Shi, Jiao [4 ]
Affiliations
[1] Xidian Univ, Sch Artificial Intelligence, Xian 710071, Peoples R China
[2] Sci & Technol Commun Informat Secur Control Lab, Jiaxing 314033, Peoples R China
[3] China Mobile Tietong Corp, Shanxi Branch, Taiyuan 030001, Peoples R China
[4] Northwestern Polytech Univ, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SAR ship detection; transfer learning; few-shot learning;
DOI
10.1109/IGARSS46834.2022.9883172
CLC Classification
P [Astronomy, Earth Sciences];
Subject Classification
07;
Abstract
Synthetic aperture radar (SAR) is superior to optical sensors in that it can identify ships at any hour and in any weather. Deep learning-based object detection relies on large amounts of data, yet SAR ship images are difficult to obtain and label. This paper presents a few-shot cross-domain transfer learning approach for SAR image ship detection. It proceeds in two stages: the first trains the detection framework on a large volume of optical remote sensing ship images as the source domain, and the second fine-tunes the framework on a few-shot balanced subset built from SAR ship images and optical remote sensing ship images. A metric learning-based prediction box classifier is used in place of a fully connected prediction box classifier. Experiments show that, when the whole detection framework is fine-tuned with the metric learning-based prediction box classifier, an AP50 of 55.99% can be reached with only 10 SAR ship images.
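The abstract's key design choice is swapping the fully connected prediction box classifier for a metric learning-based one. The paper's exact head is not given in this record; a common realization of such a classifier is a cosine-similarity head that scores features against learned per-class prototypes, sketched here in PyTorch with hypothetical feature and class dimensions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineBoxClassifier(nn.Module):
    """Metric learning-based box classifier (illustrative, not the
    authors' exact design): logits are scaled cosine similarities
    between box features and learned class prototypes, rather than
    the output of an unconstrained fully connected layer."""

    def __init__(self, feat_dim: int, num_classes: int, scale: float = 20.0):
        super().__init__()
        # One learnable prototype vector per class.
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Normalizing both sides makes the dot product a cosine
        # similarity in [-1, 1]; the scale sharpens the softmax.
        feats = F.normalize(feats, dim=-1)
        protos = F.normalize(self.prototypes, dim=-1)
        return self.scale * feats @ protos.t()

# Hypothetical usage: 4 box features of dimension 256, 2 classes
# (ship vs. background).
clf = CosineBoxClassifier(feat_dim=256, num_classes=2)
logits = clf(torch.randn(4, 256))
print(logits.shape)  # torch.Size([4, 2])
```

Because every logit is bounded by the scale factor, such heads tend to be less biased toward the data-rich source classes during few-shot fine-tuning, which is the usual motivation for preferring them over a fully connected classifier.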
Pages: 2195-2198
Page count: 4
Related Papers
50 in total
  • [1] SAR Image Classification Using Few-shot Cross-domain Transfer Learning
    Rostami, Mohammad
    Kolouri, Soheil
    Eaton, Eric
    Kim, Kyungnam
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2019), 2019, : 907 - 915
  • [2] Experiments in cross-domain few-shot learning for image classification
    Wang, Hongyu
    Gouk, Henry
    Fraser, Huon
    Frank, Eibe
    Pfahringer, Bernhard
    Mayo, Michael
    Holmes, Geoffrey
    [J]. JOURNAL OF THE ROYAL SOCIETY OF NEW ZEALAND, 2023, 53 (01) : 169 - 191
  • [3] Deep Cross-Domain Few-Shot Learning for Hyperspectral Image Classification
    Li, Zhaokui
    Liu, Ming
    Chen, Yushi
    Xu, Yimin
    Li, Wei
    Du, Qian
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [4] Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty
    Oh, Jaehoon
    Kim, Sungnyun
    Ho, Namgyu
    Kim, Jin-Hwa
    Song, Hwanjun
    Yun, Se-Young
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [5] A Two-Stage Approach to Few-Shot Learning for Image Recognition
    Das, Debasmit
    Lee, C. S. George
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 3336 - 3350
  • [6] Cross-domain transfer learning algorithm for few-shot ship recognition in remote-sensing images
    Chen, Huajie
    Lyu, Danni
    Zhou, Xiao
    Liu, Jun
    [J]. National Remote Sensing Bulletin, 2024, 28 (03) : 793 - 804
  • [7] Causal Meta-Transfer Learning for Cross-Domain Few-Shot Hyperspectral Image Classification
    Cheng, Yuhu
    Zhang, Wei
    Wang, Haoyu
    Wang, Xuesong
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [8] Knowledge transduction for cross-domain few-shot learning
    Li, Pengfang
    Liu, Fang
    Jiao, Licheng
    Li, Shuo
    Li, Lingling
    Liu, Xu
    Huang, Xinyan
    [J]. PATTERN RECOGNITION, 2023, 141
  • [9] DUAL GRAPH CROSS-DOMAIN FEW-SHOT LEARNING FOR HYPERSPECTRAL IMAGE CLASSIFICATION
    Zhang, Yuxiang
    Li, Wei
    Zhang, Mengmeng
    Tao, Ran
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3573 - 3577
  • [10] Experiments in Cross-domain Few-shot Learning for Image Classification: Extended Abstract
    Wang, Hongyu
    Fraser, Huon
    Gouk, Henry
    Frank, Eibe
    Pfahringer, Bernhard
    Mayo, Michael
    Holmes, Geoff
    [J]. ECMLPKDD WORKSHOP ON META-KNOWLEDGE TRANSFER, VOL 191, 2022, 191 : 81 - 83