Ship detection oriented to compressive sensing measurements of space optical remote sensing scenes

Cited by: 0
Authors
Xiao S. [1 ,2 ]
Zhang Y. [1 ,2 ]
Chang X. [1 ,2 ]
Sun J. [1 ,2 ]
Affiliations
[1] Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun
[2] University of Chinese Academy of Sciences, Beijing
Keywords
compressive sensing; deep learning; joint training optimization; ship detection oriented to compressive sensing measurements
DOI
10.37188/OPE.20233104.0517
Abstract
The compressive sensing (CS)-based space optical remote sensing (SORS) imaging system performs sampling and compression simultaneously in hardware at the sensing stage, but the original scene must ordinarily be reconstructed before a ship detection task can be run. CS scene reconstruction is computationally expensive, memory intensive, and time-consuming. This paper proposes an algorithm named compressive sensing and improved you only look once (CS-IM-YOLO) for detecting ships directly from the measurements acquired by the imaging system. To simulate the imaging system's block compressive sampling process, a convolutional measurement layer whose stride equals its kernel size convolves the scene, projecting the high-dimensional image signal into a low-dimensional space to obtain full-image CS measurements. Given the measurements of a scene, the proposed ship detection network extracts the ships' coordinates from them. A squeeze-and-excitation network (SENet) module is incorporated into the backbone network, and the improved backbone extracts ship feature information from the measurements. A feature pyramid network enhances feature extraction while fusing the feature information of the shallow, middle, and deep layers, after which the ship's coordinates are predicted. Notably, CS-IM-YOLO connects the convolutional measurement layer and the CS-based ship detection network for end-to-end training, which considerably simplifies preprocessing. We evaluate the performance of the algorithm on the HRSC2016 dataset. The experimental results show that, for ship detection from CS measurements of SORS scenes, CS-IM-YOLO achieves a precision of 91.60%, a recall of 87.59%, an F1 score of 0.90, and an AP of 94.13%. This demonstrates that the algorithm can perform accurate ship detection using the CS measurements of SORS scenes. © 2023 Chinese Academy of Sciences. All rights reserved.
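The block compressive sampling described above can be illustrated with a minimal NumPy sketch (an assumption for illustration, not the authors' implementation): a measurement matrix `phi` of m random kernels is applied to each non-overlapping B×B block of the scene, which is exactly what a convolutional layer with kernel size B and stride B computes, projecting each block of B² pixels down to m measurement values.

```python
import numpy as np

def block_cs_measure(image, phi):
    """Simulate a convolutional measurement layer: each of the m kernels
    in phi (shape [m, B, B]) acts as one convolution filter applied with
    stride B, so every non-overlapping B x B block of the scene is
    projected into an m-dimensional measurement vector."""
    m, B, _ = phi.shape
    H, W = image.shape
    assert H % B == 0 and W % B == 0, "image must tile into B x B blocks"
    out = np.zeros((m, H // B, W // B))
    for i in range(H // B):
        for j in range(W // B):
            block = image[i * B:(i + 1) * B, j * B:(j + 1) * B]
            # inner product of each kernel with the block
            out[:, i, j] = (phi * block).sum(axis=(1, 2))
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64))                     # toy SORS scene
phi = rng.standard_normal((25, 16, 16)) / 16   # m=25 of 16*16=256 coefficients
meas = block_cs_measure(img, phi)
print(meas.shape)  # (25, 4, 4)
```

With m = 25 and B = 16 the sampling rate is 25/256 ≈ 10%; the resulting (25, 4, 4) measurement tensor, rather than a reconstructed image, is what a detection network in this setting would consume.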
Pages: 517–532 (15 pages)