Optical Image-to-Underwater Small Target Synthetic Aperture Sonar Image Translation Algorithm Based on Improved CycleGAN

Cited by: 0
Authors
Li B.-Q. [1 ,2 ]
Huang H.-N. [1 ,2 ]
Liu J.-Y. [1 ,2 ]
Li Y. [1 ,2 ]
Affiliations
[1] Institute of Acoustics, Chinese Academy of Sciences, Beijing
[2] Key Laboratory of Science and Technology on Advanced Underwater Acoustic Signal Processing, Chinese Academy of Sciences, Beijing
Keywords
Cycle generative adversarial networks; Generative adversarial networks; Multiscale structural similarity index; Optical image-to-synthetic aperture sonar image translation; Selective dilated kernel networks
DOI
10.12263/DZXB.20200712
Abstract
The original CycleGAN yields poor image quality and is time-consuming in the task of translating optical images into synthetic aperture sonar (SAS) images of small underwater targets. To address these problems, a novel convolutional building block, the Selective Dilated Kernel (SDK), is proposed; stacking SDK blocks yields the generator SDKNet. In addition, a Multiscale Cycle-Consistent Loss Function (MS-CCLF) is proposed, which adds the Multiscale Structural Similarity Index (MS-SSIM) between input images and reconstructed images to the cycle-consistency loss. On our image translation dataset (OPT-SAS), the classification accuracy of the proposed SM-CycleGAN is 4.64% higher than that of the original CycleGAN, its generator has 4.13 MB fewer parameters, and its inference is 0.143 s faster. The experimental results show that SM-CycleGAN is better suited to translating optical images into SAS images of small underwater targets. © 2021, Chinese Institute of Electronics. All rights reserved.
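The abstract describes MS-CCLF as the usual cycle-consistency loss augmented with an MS-SSIM term between the input image and its reconstruction. A minimal NumPy sketch of that idea follows; the weights `lam_l1` and `lam_ms` are hypothetical, and the SSIM here uses global image statistics and plain mean pooling across scales as a simplification (the paper presumably uses the standard windowed, weighted MS-SSIM):

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified SSIM from global statistics (no sliding window).
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def downsample(x):
    # 2x2 average pooling, cropping odd edges.
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] +
            x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def ms_ssim(x, y, scales=3):
    # Simplified multiscale SSIM: mean of per-scale SSIM values.
    vals = []
    for _ in range(scales):
        vals.append(ssim(x, y))
        x, y = downsample(x), downsample(y)
    return float(np.mean(vals))

def ms_cclf(real, reconstructed, lam_l1=10.0, lam_ms=5.0):
    # Cycle-consistency loss: L1 term plus a (1 - MS-SSIM) structural term.
    l1 = np.abs(real - reconstructed).mean()
    return lam_l1 * l1 + lam_ms * (1.0 - ms_ssim(real, reconstructed))
```

A perfect reconstruction drives both terms to zero, while the MS-SSIM term penalizes structural (rather than pixel-wise) discrepancies that an L1 loss alone can miss.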
Pages: 1746-1753
Number of pages: 7