Scene-Embedded Generative Adversarial Networks for Semi-Supervised SAR-to-Optical Image Translation

Cited by: 0
Authors
Guo, Zhe [1 ]
Luo, Rui [1 ]
Cai, Qinglin [1 ]
Liu, Jiayi [1 ]
Zhang, Zhibo [1 ]
Mei, Shaohui [1 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Elect & Informat, Xian 710129, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Optical imaging; Optical sensors; Radar polarimetry; Vectors; Optical losses; Generators; Optical fiber networks; Measurement; Generative adversarial networks; Visualization; Scene assist; scene information fusion; synthetic aperture radar (SAR)-to-optical image translation (S2OIT);
DOI
10.1109/LGRS.2024.3471553
CLC Classification
P3 [Geophysics]; P59 [Geochemistry]
Discipline Code
0708; 070902
Abstract
SAR-to-optical image translation (S2OIT) improves the interpretability of SAR images, providing clearer visual insight that can significantly enhance remote sensing applications. Compared with supervised S2OIT methods, which are limited by the need for paired datasets, unsupervised methods offer greater practical advantages. However, existing unsupervised S2OIT approaches, designed for unpaired datasets, often struggle to generalize to scenes that differ significantly from the training data, potentially leading to mistranslations in diverse scenarios. To address these issues, we propose a scene-embedded generative adversarial network for semi-supervised S2OIT, called ScE-GAN, which exploits scene category labels in addition to unpaired image data, effectively improving the robustness of S2OIT across different scenes without adding network complexity or training cost. In particular, a scene information fusion generator (SIFG) is proposed to learn the relationship between the image and the scene directly through scene category guidance and multihead attention, enhancing its ability to adapt to scene changes. Moreover, a scene-assisted discriminator (SAD) cooperates with the generator to ensure both image authenticity and scene accuracy. Extensive experiments on two challenging datasets, SEN1-2 and QXS-SAROPT, demonstrate that our method outperforms state-of-the-art methods in both objective and subjective evaluations. Our code and more details are available at https://github.com/lr-dddd/ScE-GAN.
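As a concrete illustration of the architecture described in the abstract, the following is a minimal PyTorch-style sketch of how scene category labels could be fused into generator features through multihead attention, and how a discriminator could jointly score image authenticity and scene category. The module names, layer sizes, and exact fusion arrangement here are illustrative assumptions rather than the authors' implementation; the actual code is in the linked repository.

```python
# Illustrative sketch only: layer sizes, module names, and the fusion
# arrangement are assumptions, not the authors' ScE-GAN implementation.
import torch
import torch.nn as nn


class SceneInformationFusionBlock(nn.Module):
    """Fuses a scene-category embedding into image features via
    multihead attention (assumed arrangement of the SIFG idea)."""

    def __init__(self, channels: int, num_scenes: int, num_heads: int = 4):
        super().__init__()
        self.scene_embed = nn.Embedding(num_scenes, channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feat: torch.Tensor, scene_label: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) encoder features; scene_label: (B,) integer labels
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)            # (B, H*W, C)
        scene = self.scene_embed(scene_label).unsqueeze(1)  # (B, 1, C)
        # Spatial tokens attend to the scene embedding so every location
        # can absorb scene-level context.
        fused, _ = self.attn(query=tokens, key=scene, value=scene)
        tokens = self.norm(tokens + fused)                  # residual fusion
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class SceneAssistedDiscriminator(nn.Module):
    """Two-head discriminator: real/fake score plus scene-category logits
    (assumed layout of the SAD idea)."""

    def __init__(self, num_scenes: int, base: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, base, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(base * 2, 1)              # image authenticity
        self.scene_head = nn.Linear(base * 2, num_scenes)   # scene accuracy

    def forward(self, img: torch.Tensor):
        h = self.backbone(img)
        return self.adv_head(h), self.scene_head(h)


if __name__ == "__main__":
    # Quick shape check with dummy data
    fuse = SceneInformationFusionBlock(channels=64, num_scenes=8)
    out = fuse(torch.randn(2, 64, 16, 16), torch.randint(0, 8, (2,)))
    print(out.shape)                                        # torch.Size([2, 64, 16, 16])
    disc = SceneAssistedDiscriminator(num_scenes=8)
    adv, scene = disc(torch.randn(2, 3, 64, 64))
    print(adv.shape, scene.shape)                           # (2, 1) and (2, 8)
```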
Pages: 5
Related Papers
50 records in total
  • [1] Improved Conditional Generative Adversarial Networks for SAR-to-Optical Image Translation
    Zhan, Tao; Bian, Jiarong; Yang, Jing; Dang, Qianlong; Zhang, Erlei
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IV, 2024, 14428: 279-291
  • [2] SAR-to-optical image translation by a variational generative adversarial network
    Zhao, Jiaqi; Ni, Wenxin; Zhou, Yong; Chen, Ying; Yang, Zhi; Bian, Fuqiang
    REMOTE SENSING LETTERS, 2022, 13 (07): 672-682
  • [3] Edge-Preserving Convolutional Generative Adversarial Networks for SAR-to-Optical Image Translation
    Guo, Jie; He, Chengyu; Zhang, Mingjin; Li, Yunsong; Gao, Xinbo; Song, Bangyu
    REMOTE SENSING, 2021, 13 (18)
  • [4] SAR-to-Optical Image Translation Using Supervised Cycle-Consistent Adversarial Networks
    Wang, Lei; Xu, Xin; Yu, Yue; Yang, Rui; Gui, Rong; Xu, Zhaozhuo; Pu, Fangling
    IEEE ACCESS, 2019, 7: 129136-129149
  • [5] Semi-supervised Remote Sensing Image Scene Classification Based on Generative Adversarial Networks
    Guo, Dongen; Wu, Zechen; Zhang, Yuanzheng; Shen, Zhen
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2022, 15 (01)
  • [6] KE-GAN: Knowledge Embedded Generative Adversarial Networks for Semi-Supervised Scene Parsing
    Qi, Mengshi; Wang, Yunhong; Qin, Jie; Li, Annan
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019: 5232-5241
  • [7] HFGAN: A Heterogeneous Fusion Generative Adversarial Network for SAR-to-Optical Image Translation
    Yu, Ning; Ma, Ailong; Zhong, Yanfei; Gong, Xiaodong
    2022 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2022), 2022: 2864-2867
  • [8] SAR-to-Optical Image Translation Based on Conditional Generative Adversarial Networks-Optimization, Opportunities and Limits
    Reyes, Mario Fuentes; Auer, Stefan; Merkle, Nina; Henry, Corentin; Schmitt, Michael
    REMOTE SENSING, 2019, 11 (17)
  • [9] Galaxy Image Translation with Semi-supervised Noise-reconstructed Generative Adversarial Networks
    Lin, Qiufan; Fouchez, Dominique; Pasquet, Jerome
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021: 5634-5641