Application and influencing factors analysis of Pix2pix network in scattering imaging

Cited by: 5
Authors
Hu, Yongqiang [1 ]
Tang, Ziyi [1 ]
Hu, Jie [1 ]
Lu, Xuehua [1 ]
Zhang, Wenpeng [1 ]
Xie, Zhengwei [1 ]
Zuo, Haoyi [2 ]
Li, Ling [1 ]
Huang, Yijia [1 ]
Affiliations
[1] Sichuan Normal Univ, Sch Phys & Elect Engn, Lab Micronano Opt, Chengdu 610101, Peoples R China
[2] Sichuan Univ, Coll Phys, Chengdu 610065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; Pix2pix; Imaging; Dynamic scattering media; LAYERS; WAVES;
DOI
10.1016/j.optcom.2023.129488
CLC number
O43 [Optics];
Discipline classification code
070207; 0803;
Abstract
The imaging accuracy of deep learning-based scattering imaging techniques depends largely on the network structure and the quality of the speckle data. To date, many deep learning schemes have been proposed for imaging through a single-layer scattering medium. However, their performance is limited when the scattering medium is a thick multilayer or dynamic medium, and the influence of complex changes in the scattering environment on speckle data quality remains unclear. In this study, a Pix2pix network based on a Peak Signal-to-Noise Ratio (PSNR) loss function is proposed to reconstruct images transmitted through dynamic and double-layer scattering media. The influence of physical factors such as light intensity, dynamic perturbations of the scattering medium, and the optical depth of the scattering medium on network imaging is quantitatively analyzed. To assess the influence of these factors more objectively, a typical Dense-unet is also trained for comparison. In the experiments, the imaging results of both networks exhibit the same trends as the physical factors vary, and the proposed Pix2pix network outperforms Dense-unet. This work is helpful for future imaging studies based on machine learning.
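The record states only that the Pix2pix generator is trained with a PSNR-based loss; no implementation details are given. Below is a minimal sketch, assuming a PyTorch setup, of a negative-PSNR term that could be combined with the usual Pix2pix adversarial and L1 losses. The class name PSNRLoss, the weight lambda_psnr, and the tensor names fake_B / real_B are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class PSNRLoss(nn.Module):
    """Negative-PSNR loss (hypothetical sketch): minimizing it maximizes the
    PSNR between the reconstructed image and the ground truth."""
    def __init__(self, max_val: float = 1.0, eps: float = 1e-8):
        super().__init__()
        self.max_val = max_val  # dynamic range of the normalized images
        self.eps = eps          # avoids log10(0) when prediction equals target

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        mse = torch.mean((pred - target) ** 2)
        psnr = 10.0 * torch.log10(self.max_val ** 2 / (mse + self.eps))
        return -psnr

# Illustrative generator objective (names are assumptions, not the paper's code):
#   g_loss = adversarial_loss + lambda_psnr * psnr_loss(fake_B, real_B)
psnr_loss = PSNRLoss(max_val=1.0)
fake_B = torch.rand(1, 1, 256, 256)   # generator output (reconstructed image)
real_B = torch.rand(1, 1, 256, 256)   # ground-truth target image
print(psnr_loss(fake_B, real_B))      # scalar tensor; more negative = higher PSNR
```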
Pages: 9
Related papers
50 records in total
  • [21] Automatic Characteristic Line Drawing Generation using Pix2pix
    Yanagida, Kazuki
    Gyohten, Keiji
    Ohki, Hidehiro
    Takami, Toshiya
    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION APPLICATIONS AND METHODS (ICPRAM), 2021, : 155 - 162
  • [22] Retinal Blood Vessel Segmentation Using Pix2Pix GAN
    Popescu, Dan
    Deaconu, Mihaela
    Ichim, Loretta
    Stamatescu, Grigore
    2021 29TH MEDITERRANEAN CONFERENCE ON CONTROL AND AUTOMATION (MED), 2021, : 1173 - 1178
  • [23] MRI Scan Synthesis Methods Based on Clustering and Pix2Pix
    Baldini, Giulia
    Schmidt, Melanie
    Zaeske, Charlotte
    Caldeira, Liliana L.
    ARTIFICIAL INTELLIGENCE IN MEDICINE, PT II, AIME 2024, 2024, 14845 : 109 - 125
  • [24] A Pix2Pix Architecture for Complete Offline Handwritten Text Normalization
    Barreiro-Garrido, Alvaro
    Ruiz-Parrado, Victoria
    Moreno, A. Belen
    Velez, Jose F.
    SENSORS, 2024, 24 (12)
  • [25] Research on digital camouflage scheme based on pix2pix
    Ran, Jianguo
    Liu, Heng
    Zhang, Yue
    Command Control & Simulation, 2022, 44 (03) : 116 - 121
  • [26] Known-plaintext cryptanalysis for a computational-ghost-imaging cryptosystem via the Pix2Pix generative adversarial network
    Liu, Xiangru
    Meng, Xiangfeng
    Wang, Yurong
    Yin, Yongkai
    Yang, Xiulun
    OPTICS EXPRESS, 2021, 29 (26) : 43860 - 43874
  • [27] Multitemporal SAR-to-Optical Image Translation Using Pix2Pix With Application to Vegetation Monitoring
    Amitrano, Donato
    IEEE ACCESS, 2024, 12 : 124402 - 124413
  • [28] Using Pix2Pix to Achieve the Spatial Refinement and Transformation of Taihu Stone
    Deng, Qiaoming
    Li, Xiaofeng
    Liu, Yubo
    COMPUTATIONAL DESIGN AND ROBOTIC FABRICATION, 2023, Part F1309 : 359 - 370
  • [29] Feasibility of new fat suppression for breast MRI using pix2pix
    Mori, Mio
    Fujioka, Tomoyuki
    Katsuta, Leona
    Kikuchi, Yuka
    Oda, Goshi
    Nakagawa, Tsuyoshi
    Kitazume, Yoshio
    Kubota, Kazunori
    Tateishi, Ukihide
    JAPANESE JOURNAL OF RADIOLOGY, 2020, 38 (11) : 1075 - 1081
  • [30] Isogeometric multi-patch topology optimization based on pix2pix
    Hu, Qingyuan
    Meng, Xin
    You, Yangxiu
    FRONTIERS IN PHYSICS, 2023, 11